<|paper_start|> Title: RAQ: Relationship-Aware Graph Querying in Large Networks
Abstract: The phenomenal growth of graph data from a wide variety of real-world applications has rendered graph querying to be a problem of paramount importance. Traditional techniques use structural as well as node similarities to find matches of a given query graph in a (large) target graph. However, almost all existing techniques have tacitly ignored the presence of relationships in graphs, which are usually encoded through interactions between node and edge labels. In this paper, we propose RAQ -- Relationship-Aware Graph Querying, to mitigate this gap. Given a query graph, RAQ identifies the $k$ best matching subgraphs of the target graph that encode similar relationships as in the query graph. To assess the utility of RAQ as a graph querying paradigm for knowledge discovery and exploration tasks, we perform a user survey on the Internet Movie Database (IMDb), where an overwhelming 86% of the 170 surveyed users preferred the relationship-aware match over traditional graph querying. The need to perform subgraph isomorphism renders RAQ NP-hard. The querying is made practical through beam stack search. Extensive experiments on multiple real-world graph datasets demonstrate RAQ to be effective, efficient, and scalable.
Introduction
\label{sec:intro}
Recent advances in scientific technologies generate large volumes of data in
the form of graphs such as protein-protein interaction networks <|cite_start|> (Reference: Mining Discriminative Subgraphs from Global-state Networks: Global-state networks provide a powerful mechanism to model the increasing heterogeneity in data generated by current systems. Such a network comprises of a series of network snapshots with dynamic local states at nodes, and a global network state indicating the occurrence of an event. Mining discriminative subgraphs from global-state networks allows us to identify the influential sub-networks that have maximum impact on the global state and unearth the complex relationships between the local entities of a network and their collective behavior. In this paper, we explore this problem and design a technique called MINDS to mine minimally discriminative subgraphs from large global-state networks. To combat the exponential subgraph search space, we derive the concept of an edit map and perform Metropolis Hastings sampling on it to compute the answer set. Furthermore, we formulate the idea of network-constrained decision trees to learn prediction models that adhere to the underlying network structure. Extensive experiments on real datasets demonstrate excellent accuracy in terms of prediction quality. Additionally, MINDS achieves a speed-up of at least four orders of magnitude over baseline techniques.) <|cite_end|>,
social networks <|cite_start|> (Reference: Recommendations to boost content spread in social networks: Content sharing in social networks is a powerful mechanism for discovering content on the Internet. The degree to which content is disseminated within the network depends on the connectivity relationships among network nodes. Existing schemes for recommending connections in social networks are based on the number of common neighbors, similarity of user profiles, etc. However, such similarity-based connections do not consider the amount of content discovered. In this paper, we propose novel algorithms for recommending connections that boost content propagation in a social network without compromising on the relevance of the recommendations. Unlike existing work on influence propagation, in our environment, we are looking for edges instead of nodes, with a bound on the number of incident edges per node. We show that the content spread function is not submodular, and develop approximation algorithms for computing a near-optimal set of edges. Through experiments on real-world social graphs such as Flickr and Twitter, we show that our approximation algorithms achieve content spreads that are as much as 90 times higher compared to existing heuristics for recommending connections.) <|cite_end|>, and co-purchase networks <|cite_start|> (Reference: Defining and Evaluating Network Communities based on Ground-truth: Nodes in real-world networks organize into densely linked communities where edges appear with high concentration among the members of the community. Identifying such communities of nodes has proven to be a challenging task mainly due to a plethora of definitions of a community, intractability of algorithms, issues with evaluation and the lack of a reliable gold-standard ground-truth. In this paper we study a set of 230 large real-world social, collaboration and information networks where nodes explicitly state their group memberships. For example, in social networks nodes explicitly join various interest based social groups. We use such groups to define a reliable and robust notion of ground-truth communities. We then propose a methodology which allows us to compare and quantitatively evaluate how different structural definitions of network communities correspond to ground-truth communities. We choose 13 commonly used structural definitions of network communities and examine their sensitivity, robustness and performance in identifying the ground-truth. We show that the 13 structural definitions are heavily correlated and naturally group into four classes. We find that two of these definitions, Conductance and Triad-participation-ratio, consistently give the best performance in identifying ground-truth communities. We also investigate a task of detecting communities given a single seed node. We extend the local spectral clustering algorithm into a heuristic parameter-free community detection method that easily scales to networks with more than hundred million nodes. The proposed method achieves 30% relative improvement over current local clustering methods.) <|cite_end|>.
Moreover, owing to the advent of the semantic web <|cite_start|> (Reference: Semantic {Web: The Semantic Web is an extension of the Web in which information is given a well-defined meaning, improving the access of computers and people to information and enabling them to work together in close cooperation. By using Semantic Web technologies to develop applications, it is possible to provide higher functionality and to offer users better services.) <|cite_end|>and RDF <|cite_start|> (Reference: Resource description framework (rdf): The Resource Description Framework (RDF) is the standard knowledge representation language for the Semantic Web, an evolution of the World Wide Web that aims to provide a well-founded infrastructure for publishing, sharing and querying structured data. This article provides an introduction to RDF and its related vocabulary definition language RDF Schema, and explains its relationship with the OWL Web Ontology Language. Finally, it provides an overview of the historical development of RDF and related languages for Web metadata.) <|cite_end|>as the preferred choice for data interchange through the web,
graph-based searching and querying are fast becoming a popular and cardinal way
of web searching. For example, Facebook's Graph Search <|cite_start|> (Reference: Facebook graph search: The Facebook Graph Search feature works roughly like Google's search engine, ...) <|cite_end|>and
Google's Knowledge Graph <|cite_start|> (Reference: Reconstructing Custom Fragments of Google Knowledge Graph on the Fly: . Google Knowledge Graph is more complicated than public knowledge graphs when it comes to retrieving and reusing data for e.g. building custom KGs or concept maps. This is because even though there is a dedicated graph search API, it only offers information about individual entities without links to other relevant resources that are only available in Google search knowledge panels. In this paper, we present Knowledge Graph Viewer, a tool that utilizes both the graph search API and knowledge panels in order to obtain relationships between graph entities to reconstruct custom substructures of the large knowledge base. This demo will showcase its usage and functionalities in each step of the KG creation while explaining the concept behind the workflow.) <|cite_end|>require searching mechanisms that work
with graph-based data rather than the traditional text-based queries. In such
graphs, nodes denote entities and edges denote the associations between
entities. In most cases, the nodes are tagged with additional meta information
to characterize the corresponding entities <|cite_start|> (Reference: Answering top-k representative queries on graph databases: Given a function that classifies a data object as relevant or irrelevant, we consider the task of selecting k objects that best represent all relevant objects in the underlying database. This problem occurs naturally when analysts want to familiarize themselves with the relevant objects in a database using a small set of k exemplars. In this paper, we solve the problem of top-k representative queries on graph databases. While graph databases model a wide range of scientific data, solving the problem in the context of graphs presents us with unique challenges due to the inherent complexity of matching structures. Furthermore, top-k representative queries map to the classic Set Cover problem, making it NP-hard. To overcome these challenges, we develop a greedy approximation with theoretical guarantees on the quality of the answer set, noting that a better approximation is not feasible in polynomial time. To further optimize the quadratic computational cost of the greedy algorithm, we propose an index structure called NB-Index to index the \theta-neighborhoods of the database graphs by employing a novel combination of Lipschitz embedding and agglomerative clustering. Extensive experiments on real graph datasets validate the efficiency and effectiveness of the proposed techniques that achieve up to two orders of magnitude speed-up over state-of-the-art algorithms.) <|cite_end|>. The ability
to query these graph datasets efficiently is a fundamental necessity for a wide
array of applications <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|> <|cite_start|> (Reference: Neighborhood Based Fast Graph Search in Large Networks: Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches.
In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost.) <|cite_end|> <|cite_start|> (Reference: Answering top-k representative queries on graph databases: Given a function that classifies a data object as relevant or irrelevant, we consider the task of selecting k objects that best represent all relevant objects in the underlying database. This problem occurs naturally when analysts want to familiarize themselves with the relevant objects in a database using a small set of k exemplars. In this paper, we solve the problem of top-k representative queries on graph databases. While graph databases model a wide range of scientific data, solving the problem in the context of graphs presents us with unique challenges due to the inherent complexity of matching structures. Furthermore, top-k representative queries map to the classic Set Cover problem, making it NP-hard. To overcome these challenges, we develop a greedy approximation with theoretical guarantees on the quality of the answer set, noting that a better approximation is not feasible in polynomial time. To further optimize the quadratic computational cost of the greedy algorithm, we propose an index structure called NB-Index to index the \theta-neighborhoods of the database graphs by employing a novel combination of Lipschitz embedding and agglomerative clustering. Extensive experiments on real graph datasets validate the efficiency and effectiveness of the proposed techniques that achieve up to two orders of magnitude speed-up over state-of-the-art algorithms.) <|cite_end|> <|cite_start|> (Reference: Closure-Tree: An Index Structure for Graph Queries: Graphs have become popular for modeling structured data. As a result, graph queries are becoming common and graph indexing has come to play an essential role in query processing. We introduce the concept of a graph closure, a generalized graph that represents a number of graphs. Our indexing technique, called Closure-tree, organizes graphs hierarchically where each node summarizes its descendants by a graph closure. Closure-tree can efficiently support both subgraph queries and similarity queries. Subgraph queries find graphs that contain a specific subgraph, whereas similarity queries find graphs that are similar to a query graph. For subgraph queries, we propose a technique called pseudo subgraph isomorphism which approximates subgraph isomorphism with high accuracy. For similarity queries, we measure graph similarity through edit distance using heuristic graph mapping methods. We implement two kinds of similarity queries: K-NN query and range query. 
Our experiments on chemical compounds and synthetic graphs show that for subgraph queries, Closuretree outperforms existing techniques by up to two orders of magnitude in terms of candidate answer set size and index size. For similarity queries, our experiments validate the quality and efficiency of the presented algorithms.) <|cite_end|>.
One of the most common frameworks in graph querying is to find similar
embeddings of a query graph $q$ in a much larger target graph $G$. More
formally, for a query graph $q$ and a distance function $d(q,g)$ between two
graphs $q$ and $g$, the goal is to identify the $k$ most similar subgraphs
$g_1,\cdots,g_k \subset G$ of the target graph $G$. The notion of ``similar''
graphs depends on the efficacy of the distance function $d(q,g)$. Typical
distance (or similarity) functions for graphs range from maximal common subgraphs and graph
edit distance <|cite_start|> (Reference: Closure-Tree: An Index Structure for Graph Queries: Graphs have become popular for modeling structured data. As a result, graph queries are becoming common and graph indexing has come to play an essential role in query processing. We introduce the concept of a graph closure, a generalized graph that represents a number of graphs. Our indexing technique, called Closure-tree, organizes graphs hierarchically where each node summarizes its descendants by a graph closure. Closure-tree can efficiently support both subgraph queries and similarity queries. Subgraph queries find graphs that contain a specific subgraph, whereas similarity queries find graphs that are similar to a query graph. For subgraph queries, we propose a technique called pseudo subgraph isomorphism which approximates subgraph isomorphism with high accuracy. For similarity queries, we measure graph similarity through edit distance using heuristic graph mapping methods. We implement two kinds of similarity queries: K-NN query and range query. Our experiments on chemical compounds and synthetic graphs show that for subgraph queries, Closuretree outperforms existing techniques by up to two orders of magnitude in terms of candidate answer set size and index size. For similarity queries, our experiments validate the quality and efficiency of the presented algorithms.) <|cite_end|>to more recent developments in fuzzy graph matching <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|> <|cite_start|> (Reference: Neighborhood Based Fast Graph Search in Large Networks: Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. 
Rather, it is more appealing to find the top-k approximate matches.
In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost.) <|cite_end|>. While these distance functions extend the state-of-the-art
in graph querying, they lack the ability to adapt the distance function based
on the \emph{context} expressed in the query.
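To make this setup concrete, the sketch below retrieves the $k$ induced
connected subgraphs of $G$ closest to $q$ under a plug-in distance $d(q,g)$.
It is illustrative only: the brute-force enumeration, the \texttt{networkx}
calls, and the edit-distance placeholder are our own simplifications and not
the method developed in this paper.
\begin{verbatim}
# Illustrative top-k subgraph querying with a plug-in distance d(q, g).
# Brute-force enumeration; practical systems rely on indexes and pruning.
import heapq
from itertools import combinations

import networkx as nx


def edit_distance(q, g):
    # Placeholder for d(q, g): structural graph edit distance.
    return nx.graph_edit_distance(q, g)


def top_k_matches(q, G, k=3, distance=edit_distance):
    n = q.number_of_nodes()
    candidates = []
    for nodes in combinations(G.nodes(), n):
        g = G.subgraph(nodes)
        if nx.is_connected(g):  # keep only connected embeddings
            candidates.append((distance(q, g), tuple(nodes)))
    return heapq.nsmallest(k, candidates, key=lambda c: c[0])
\end{verbatim}
The exponential cost of such enumeration is exactly why index support and
pruning, discussed later, are indispensable.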
\ignore{
\textbf{1. Querying based on associations: } Traditional similarity functions consider two graphs as similar if they are structurally similar and they contain similar nodes <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|> <|cite_start|> (Reference: Neighborhood Based Fast Graph Search in Large Networks: Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches.
In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost.) <|cite_end|> <|cite_start|> (Reference: DELTACON: A Principled Massive-Graph Similarity Function: How much did a network change since yesterday? How different is the wiring between Bob's brain (a left-handed male) and Alice's brain (a right-handed female)? Graph similarity with known node correspondence, i.e. the detection of changes in the connectivity of graphs, arises in numerous settings. In this work, we formally state the axioms and desired properties of the graph similarity functions, and evaluate when state-of-the-art methods fail to detect crucial connectivity changes in graphs. We propose DeltaCon, a principled, intuitive, and scalable algorithm that assesses the similarity between two graphs on the same nodes (e.g. employees of a company, customers of a mobile carrier). Experiments on various synthetic and real graphs showcase the advantages of our method over existing similarity measures. Finally, we employ DeltaCon to real applications: (a) we classify people to groups of high and low creativity based on their brain connectivity graphs, and (b) do temporal anomaly detection in the who-emails-whom Enron graph.) <|cite_end|> <|cite_start|> (Reference: Graph Similarity Search with Edit Distance Constraint in Large Graph Databases: Due to many real applications of graph databases, it has become increasingly important to retrieve graphs g (in graph database D) that approximately match with query graph q, rather than exact subgraph matches. In this paper, we study the problem of graph similarity search, which retrieves graphs that are similar to a given query graph under the constraint of the minimum edit distance. Specifically, we derive a lower bound, branch-based bound, which can greatly reduce the search space of the graph similarity search. We also propose a tree index structure, namely b-tree, to facilitate effective pruning and efficient query processing. Extensive experiments confirm that our proposed approach outperforms the existing approaches by orders of magnitude, in terms of both pruning power and query response time.) <|cite_end|>. Two nodes are similar if they are represented by similar feature vectors (the most common form is just a node label). This definition of similarity function is completely oblivious to the \emph{associations} encoded in the graphs.
To illustrate the importance of associations, let us consider Fig.~\ref{fig:motivation}. Fig.~\ref{fig:query} depicts two graphs where each node is characterized by a feature vector. Consider the well-known problem of gene-module discovery in gene-interaction networks. In a gene interaction network, each node corresponds to a gene and two genes are connected if they co-regulate a biological process. Biologists are often interested in identifying gene modules (subgraphs) that display similar reactions to diseases or external perturbations. The reaction of a gene (node) to an external perturbation is captured through the gene expression level.
}
\begin{figure*}[!ht]
\centering
\includegraphics[width=\textwidth]{pdf/motivation.pdf}
\figcaption{\textbf{Illustration of context-aware graph querying
(Ex.~\ref{example:cgq}). Here, ML, AI, DM and DB stand for machine learning,
artificial intelligence, data mining, and databases, respectively.}}
\label{fig:motivation}
\end{figure*}
\ignore{
The above example highlights two key requirements in context-aware graph querying.
First, the context changes with each query and accordingly the distance function
needs to adapt. More specifically, we need to learn the features that are
important given the context expressed in the query and the distance function should weight
features accordingly while computing node similarities. Second, graph querying
always poses a computation challenge due to the inherent complexity of matching
structures. Consequently, without the support of an index structure, answering
queries is not scalable. While a number of index structures exist for graph
queries <|cite_start|> (Reference: Closure-Tree: An Index Structure for Graph Queries: Graphs have become popular for modeling structured data. As a result, graph queries are becoming common and graph indexing has come to play an essential role in query processing. We introduce the concept of a graph closure, a generalized graph that represents a number of graphs. Our indexing technique, called Closure-tree, organizes graphs hierarchically where each node summarizes its descendants by a graph closure. Closure-tree can efficiently support both subgraph queries and similarity queries. Subgraph queries find graphs that contain a specific subgraph, whereas similarity queries find graphs that are similar to a query graph. For subgraph queries, we propose a technique called pseudo subgraph isomorphism which approximates subgraph isomorphism with high accuracy. For similarity queries, we measure graph similarity through edit distance using heuristic graph mapping methods. We implement two kinds of similarity queries: K-NN query and range query. Our experiments on chemical compounds and synthetic graphs show that for subgraph queries, Closuretree outperforms existing techniques by up to two orders of magnitude in terms of candidate answer set size and index size. For similarity queries, our experiments validate the quality and efficiency of the presented algorithms.) <|cite_end|> <|cite_start|> (Reference: Answering top-k representative queries on graph databases: Given a function that classifies a data object as relevant or irrelevant, we consider the task of selecting k objects that best represent all relevant objects in the underlying database. This problem occurs naturally when analysts want to familiarize themselves with the relevant objects in a database using a small set of k exemplars. In this paper, we solve the problem of top-k representative queries on graph databases. While graph databases model a wide range of scientific data, solving the problem in the context of graphs presents us with unique challenges due to the inherent complexity of matching structures. Furthermore, top-k representative queries map to the classic Set Cover problem, making it NP-hard. To overcome these challenges, we develop a greedy approximation with theoretical guarantees on the quality of the answer set, noting that a better approximation is not feasible in polynomial time. To further optimize the quadratic computational cost of the greedy algorithm, we propose an index structure called NB-Index to index the \theta-neighborhoods of the database graphs by employing a novel combination of Lipschitz embedding and agglomerative clustering. Extensive experiments on real graph datasets validate the efficiency and effectiveness of the proposed techniques that achieve up to two orders of magnitude speed-up over state-of-the-art algorithms.) <|cite_end|> <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. 
In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|>, none of them adapt to dynamic distance
functions. As a result, we need to design a flexible index structure for
answering contextual graph queries.
Fig.~\ref{fig:pipeline} presents the pipeline of our technique that allows us to
overcome the two challenges mentioned above.
}
\begin{example}
\label{example:cgq}
Consider the query graph $q_1$ shown in Fig.~\ref{fig:motivation}. It
describes a collaboration pattern among three authors of similar repute
(based on $H$-index) from the same university (Stanford). Therefore, a good
distance function should identify other collaboration patterns among people
from the same organization and with similar $H$-index values.
The graph $t_1$ shows a collaboration pattern that is considered a ``good''
match using traditional distance functions. Structurally, $t_1$ is
identical to $q_1$. The nodes in $t_1$ also match well with those in $q_1$.
For example, Alon Halevy matches well with Jeffrey Ullman since they both
work on databases and have similar $H$-indices. A similar association exists
between Michael Jordan and Daphne Koller. The graduate student also matches
well with Sebastian Thrun since they both are from Stanford and work on AI.
However, despite this good correspondence in structure and node
descriptions, the entire context of a collaboration pattern among
researchers from the same organization of similar repute is lost.
To see why, contrast this with the context-aware match $g_1$. Notice that,
individually, none of the nodes in $g_1$ match well with the nodes in
$q_1$. However, $g_1$ also represents a collaboration pattern among people
of similar $H$-indices from the same organization. Similarly, $g_1'$ provides
another collaboration pattern among researchers from the same organization.
However, in $g_1'$, the context of collaboration among people of similar
$H$-indices is not preserved as well as in $g_1$. Thus, $g_1$ is considered a
better context-aware match than $g_1'$.
To provide one more example of a context-aware match, $q_2$ is a
collaboration pattern among eminent researchers from the data mining area.
Unlike $q_1$, the research area and $H$-index provide the context here and,
therefore, $g_2$ is a good match since it is a collaboration among database
researchers with similar $H$-indices.
In sum, while each node/edge feature contributes equally to the similarity
in traditional measures, the proposed context-aware measure \emph{learns}
their respective importance in computing matches that are more relevant to
the query.
\end{example}
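The intuition behind Ex.~\ref{example:cgq} can be phrased operationally: a
feature contributes to the context to the extent that the endpoints of the
query edges agree on it, and a candidate edge is then scored by how well it
preserves those agreements. The sketch below illustrates only this intuition;
the feature names, the equality/closeness test, and the weighting scheme are
assumptions made for the example, not the contextual similarity function
defined in Sec.~\ref{sec:problem}.
\begin{verbatim}
# Simplified illustration of "context" as per-feature agreement on query edges.
def agrees(a, b, tol=0.15):
    if isinstance(a, (int, float)) and isinstance(b, (int, float)):
        return abs(a - b) <= tol * max(abs(a), abs(b), 1)  # similar numbers
    return a == b                                          # equal categories


def context_weights(query_edges, features):
    # query_edges: list of (feature_dict, feature_dict) endpoint pairs
    return {f: sum(agrees(u[f], v[f]) for u, v in query_edges) / len(query_edges)
            for f in features}


def edge_score(weights, u, v):
    # Weighted fraction of query-relevant agreements preserved by edge (u, v).
    total = sum(weights.values()) or 1.0
    return sum(w * agrees(u[f], v[f]) for f, w in weights.items()) / total


# Toy usage mirroring q1: organization and H-index define the context.
q_edges = [({"org": "Stanford", "h": 60, "area": "DB"},
            {"org": "Stanford", "h": 55, "area": "AI"}),
           ({"org": "Stanford", "h": 55, "area": "AI"},
            {"org": "Stanford", "h": 58, "area": "ML"})]
w = context_weights(q_edges, ["org", "h", "area"])  # org, h high; area low
\end{verbatim}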
\ignore{
The applications of contextual graph queries is not limited to just co-authorship
networks. Consider protein-protein interaction networks (PPI) where each node
is a protein and two proteins are connected if they jointly regulate a certain
biological process. It is common to annotate proteins with additional
information such as Gene Ontology (GO) tags, the diseases that they
impact <|cite_start|> (Reference: Mining Discriminative Subgraphs from Global-state Networks: Global-state networks provide a powerful mechanism to model the increasing heterogeneity in data generated by current systems. Such a network comprises of a series of network snapshots with dynamic local states at nodes, and a global network state indicating the occurrence of an event. Mining discriminative subgraphs from global-state networks allows us to identify the influential sub-networks that have maximum impact on the global state and unearth the complex relationships between the local entities of a network and their collective behavior. In this paper, we explore this problem and design a technique called MINDS to mine minimally discriminative subgraphs from large global-state networks. To combat the exponential subgraph search space, we derive the concept of an edit map and perform Metropolis Hastings sampling on it to compute the answer set. Furthermore, we formulate the idea of network-constrained decision trees to learn prediction models that adhere to the underlying network structure. Extensive experiments on real datasets demonstrate excellent accuracy in terms of prediction quality. Additionally, MINDS achieves a speed-up of at least four orders of magnitude over baseline techniques.) <|cite_end|>, etc. A functional module in a PPI is a connected subgraph responsible
for proper functioning of a biological process. The expression levels in such
functional modules are monitored and studied to understand the onset of
diseases <|cite_start|> (Reference: Mining Discriminative Subgraphs from Global-state Networks: Global-state networks provide a powerful mechanism to model the increasing heterogeneity in data generated by current systems. Such a network comprises of a series of network snapshots with dynamic local states at nodes, and a global network state indicating the occurrence of an event. Mining discriminative subgraphs from global-state networks allows us to identify the influential sub-networks that have maximum impact on the global state and unearth the complex relationships between the local entities of a network and their collective behavior. In this paper, we explore this problem and design a technique called MINDS to mine minimally discriminative subgraphs from large global-state networks. To combat the exponential subgraph search space, we derive the concept of an edit map and perform Metropolis Hastings sampling on it to compute the answer set. Furthermore, we formulate the idea of network-constrained decision trees to learn prediction models that adhere to the underlying network structure. Extensive experiments on real datasets demonstrate excellent accuracy in terms of prediction quality. Additionally, MINDS achieves a speed-up of at least four orders of magnitude over baseline techniques.) <|cite_end|>. Now, consider a domain scientist who knows one functional
module that is responsible for Cancer.
}
\noindent
\textbf{Challenges:}
As highlighted by Ex.~\ref{example:cgq}, designing effective techniques for
context-aware graph querying is non-trivial and poses the following key
challenges:
\begin{itemize}
\item \textbf{Quantifying context:} It may appear that a common or frequent
feature value specifies the context. For example, in $q_1$ of
Fig.~\ref{fig:motivation}, we may assume that since all authors are
from Stanford, the user intends to identify other collaborations from
authors belonging to the same organization. However, such an
assumption does not mirror reality. To illustrate, consider the scenario
where authors are also tagged with their country. Under this setting,
if a query contains authors from the USA, it cannot be concluded with
confidence that the user intends to find collaborations among authors
from the same country. The USA is one of the most active countries in
computer science research and contributes large volumes of papers every
year. Thus, the event of all three authors belonging to the USA could
occur by chance and does not necessarily reflect a context expressed by
the user. Therefore, an important question arises: \emph{How do we
quantify context from the various features characterizing a node?} (A
simple chance-agreement sketch of this idea appears after this list.)
\item \textbf{Context is dynamic:} The context \emph{changes} with each
query. For example, in $q_1$, $H$-index and organization provide the
context, whereas in $q_2$, $H$-index and area of research define the
context. Consequently, the distance function needs to adapt based on
the query. Particularly, we need to learn the features that provide
the context in the query and the distance function should appropriately
weigh those features more while computing the node similarities.
\item \textbf{Similar matches may have dissimilar nodes:} As outlined in
Ex.~\ref{example:cgq}, similarity between individual nodes does not play
a role. Rather, the goal is to identify subgraphs that encode similar
associations such as authors from the same organization or similar
$H$-indices. Technically, since edges encode associations between nodes,
we need to identify subgraphs that have similar edges.
\item \textbf{Exponential search space:} Graph querying always poses a
computation challenge due to the inherent complexity of matching
structures. In addition, the number of subgraphs is exponential with
respect to the size of the target graph. Consequently, without the
support of an index structure, answering the proposed context-aware
queries is not feasible. While a number of index structures exist for
graph queries <|cite_start|> (Reference: Closure-Tree: An Index Structure for Graph Queries: Graphs have become popular for modeling structured data. As a result, graph queries are becoming common and graph indexing has come to play an essential role in query processing. We introduce the concept of a graph closure, a generalized graph that represents a number of graphs. Our indexing technique, called Closure-tree, organizes graphs hierarchically where each node summarizes its descendants by a graph closure. Closure-tree can efficiently support both subgraph queries and similarity queries. Subgraph queries find graphs that contain a specific subgraph, whereas similarity queries find graphs that are similar to a query graph. For subgraph queries, we propose a technique called pseudo subgraph isomorphism which approximates subgraph isomorphism with high accuracy. For similarity queries, we measure graph similarity through edit distance using heuristic graph mapping methods. We implement two kinds of similarity queries: K-NN query and range query. Our experiments on chemical compounds and synthetic graphs show that for subgraph queries, Closuretree outperforms existing techniques by up to two orders of magnitude in terms of candidate answer set size and index size. For similarity queries, our experiments validate the quality and efficiency of the presented algorithms.) <|cite_end|> <|cite_start|> (Reference: Answering top-k representative queries on graph databases: Given a function that classifies a data object as relevant or irrelevant, we consider the task of selecting k objects that best represent all relevant objects in the underlying database. This problem occurs naturally when analysts want to familiarize themselves with the relevant objects in a database using a small set of k exemplars. In this paper, we solve the problem of top-k representative queries on graph databases. While graph databases model a wide range of scientific data, solving the problem in the context of graphs presents us with unique challenges due to the inherent complexity of matching structures. Furthermore, top-k representative queries map to the classic Set Cover problem, making it NP-hard. To overcome these challenges, we develop a greedy approximation with theoretical guarantees on the quality of the answer set, noting that a better approximation is not feasible in polynomial time. To further optimize the quadratic computational cost of the greedy algorithm, we propose an index structure called NB-Index to index the \theta-neighborhoods of the database graphs by employing a novel combination of Lipschitz embedding and agglomerative clustering. Extensive experiments on real graph datasets validate the efficiency and effectiveness of the proposed techniques that achieve up to two orders of magnitude speed-up over state-of-the-art algorithms.) <|cite_end|> <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. 
In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|>, none of them adapt to
dynamic distance functions.
\end{itemize}
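Regarding the first challenge above, agreement on a feature is informative
only if it is unlikely to arise by chance in the target graph: all query
authors sharing the country USA is far less surprising than all of them
sharing the organization Stanford. The sketch below discounts raw agreement
weights by an estimate of chance agreement. It is an illustrative baseline
under our own assumptions (feature-wise value frequencies, additive
discounting) and not the quantification adopted by CGQ.
\begin{verbatim}
# Illustrative discounting of context weights by chance agreement.
from collections import Counter


def chance_agreement(values):
    # Probability that two random target-graph nodes share a feature value;
    # values is the list of that feature's values over all target nodes.
    n = len(values)
    return sum((c / n) ** 2 for c in Counter(values).values())


def discounted_weights(raw_weights, target_values_by_feature):
    # raw_weights: per-feature agreement rates observed on the query edges
    # (e.g., as produced by a query-side agreement computation).
    out = {}
    for f, w in raw_weights.items():
        p_chance = chance_agreement(target_values_by_feature[f])
        out[f] = max(0.0, w - p_chance)  # keep only "surprising" agreement
    return out
\end{verbatim}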
\noindent
\textbf{Contributions:}
Owing to the challenges outlined above, we need to design an \emph{adaptive
similarity function} and a \emph{flexible index structure} for answering
contextual graph queries. The key contributions of our work are as follows:
\begin{itemize}
\item We introduce the notion of \emph{context} to graph search. Although
context is popularly used for querying web data, to the best of our
knowledge this is the first effort to employ it for querying graph data.
Our novel (sub)-graph searching paradigm facilitates learning of the
context prevalent in the query graph, which is further used to devise a
context-aware similarity function (Sec.~\ref{sec:problem}).
\item Under the proposed searching paradigm, we propose a novel problem of
retrieving the top-$k$ most \emph{contextually similar subgraph(s)} of
a query graph (Sec.~\ref{subsec:problem}).
\item We propose a \emph{flexible} CGQ index structure that possesses the
capability to adapt to dynamic distance functions along with an
\emph{efficient} CGQ querying algorithm (Sec.~\ref{sec:algo}).
\item Through an in-depth empirical analysis on real datasets, we show the
superiority of the proposed similarity metric using both qualitative
and quantitative results. We also show that the proposed CGQ index and
its associated querying algorithm are \emph{scalable}: we achieve a
speed-up of up to three orders of magnitude over a baseline strategy
with a small memory footprint (Sec.~\ref{sec:exp}). The executables and datasets are available at http://bit.ly/2fishkH.
\end{itemize}
\begin{table}
\centering
\small
\scalebox{0.8}{
\begin{tabular}{ll}\hline
\textbf{Item} & \textbf{Definition}\\%\hline
\hline
$g_1 \subseteq g_2$ & $g_1$ is subgraph isomorphic to $g_2$.\\%\hline
$f(v)=[f_1(v),\ldots,f_d(v)]$ & $d$-dimensional feature vector for node $v$\\%\hline
$\mathcal{N}(f_i,e)$ & Neighborhood vector for feature $f_i$ around edge $e$\\%\hline
$w(e)=[w(f_1,e),\ldots,w(f_d,e)]$ & Weight vector of edge $e$\\%\hline
$s(e)=[s(f_1,e),\ldots,s(f_d,e)]$ & Context vector corresponding to edge $e$\\%\hline
$cs(e,e')$ & Contextual similarity between edges $e$ and $e'$\\%\hline
$\phi: V_q \rightarrow V$ & (Sub-)graph injection function\\%\hline
$CGS_{\phi}(q,G)$ & Contextual similarity between $q$ and $G$\\%\hline
\hline
\end{tabular}
}
\tabcaption{\textbf{Summary of the notations used.}}
\label{tab:terminology}
\moveups
\end{table}
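As a reading aid for Table~\ref{tab:terminology}, the sketch below spells out
the containers implied by the notation. The formulas defining the
neighborhood, weight, and context vectors, and $CGS_{\phi}(q,G)$ itself, are
given later in the paper; the field names and types here are placeholders.
\begin{verbatim}
# Containers implied by the notation table; definitions come later in the paper.
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Node:
    features: List[float]            # f(v) = [f_1(v), ..., f_d(v)]


@dataclass
class EdgeInfo:
    neighborhood: List[List[float]]  # N(f_i, e) for each feature f_i
    weights: List[float]             # w(e) = [w(f_1, e), ..., w(f_d, e)]
    context: List[float]             # s(e) = [s(f_1, e), ..., s(f_d, e)]


# phi : V_q -> V, the (sub-)graph injection from query nodes to target nodes
Injection = Dict[int, int]
\end{verbatim}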
Related Work
The general problem of graph querying has been studied extensively over the past decade.
Here, we overview the existing works that overlap with our problem.
\\
\textbf{Traditional Similarity Measures:} Research on both exact and approximate (sub-)graph querying has employed a plethora of similarity measures, the most prominent among them being graph edit distance <|cite_start|> (Reference: Graph Similarity Search with Edit Distance Constraint in Large Graph Databases: Due to many real applications of graph databases, it has become increasingly important to retrieve graphs g (in graph database D) that approximately match with query graph q, rather than exact subgraph matches. In this paper, we study the problem of graph similarity search, which retrieves graphs that are similar to a given query graph under the constraint of the minimum edit distance. Specifically, we derive a lower bound, branch-based bound, which can greatly reduce the search space of the graph similarity search. We also propose a tree index structure, namely b-tree, to facilitate effective pruning and efficient query processing. Extensive experiments confirm that our proposed approach outperforms the existing approaches by orders of magnitude, in terms of both pruning power and query response time.) <|cite_end|> <|cite_start|> (Reference: Comparing stars: On approximating graph edit distance: Graph data have become ubiquitous and manipulating them based on similarity is essential for many applications. Graph edit distance is one of the most widely accepted measures to determine similarities between graphs and has extensive applications in the fields of pattern recognition, computer vision etc. Unfortunately, the problem of graph edit distance computation is NP-Hard in general. Accordingly, in this paper we introduce three novel methods to compute the upper and lower bounds for the edit distance between two graphs in polynomial time. Applying these methods, two algorithms AppFull and AppSub are introduced to perform different kinds of graph search on graph databases. Comprehensive experimental studies are conducted on both real and synthetic datasets to examine various aspects of the methods for bounding graph edit distance. Result shows that these methods achieve good scalability in terms of both the number of graphs and the size of graphs. The effectiveness of these algorithms also confirms the usefulness of using our bounds in filtering and searching of graphs.) <|cite_end|>, maximum and minimum common subgraph <|cite_start|> (Reference: A graph distance metric based on the maximal common subgraph: ) <|cite_end|> <|cite_start|> (Reference: A graph distance metric combining maximum common subgraph and minimum common supergraph: ) <|cite_end|>, edge misses <|cite_start|> (Reference: TALE: A tool for approximate large graph matching: Large graph datasets are common in many emerging database applications, and most notably in large-scale scientific applications. To fully exploit the wealth of information encoded in graphs, effective and efficient graph matching tools are critical. Due to the noisy and incomplete nature of real graph datasets, approximate, rather than exact, graph matching is required. Furthermore, many modern applications need to query large graphs, each of which has hundreds to thousands of nodes and edges. This paper presents a novel technique for approximate matching of large graph queries. We propose a novel indexing method that incorporates graph structural information in a hybrid index structure. This indexing technique achieves high pruning power and the index size scales linearly with the database size.
In addition, we propose an innovative matching paradigm to query large graphs. This technique distinguishes nodes by their importance in the graph structure. The matching algorithm first matches the important nodes of a query and then progressively extends these matches. Through experiments on several real datasets, this paper demonstrates the effectiveness and efficiency of the proposed method.) <|cite_end|>, structural similarity <|cite_start|> (Reference: Fast Best-Effort Pattern Matching in Large Attributed Graphs: We focus on large graphs where nodes have attributes, such as a social network where the nodes are labelled with each person's job title. In such a setting, we want to find subgraphs that match a user query pattern. For example, a "star" query would be, "find a CEO who has strong interactions with a Manager, a Lawyer,and an Accountant, or another structure as close to that as possible". Similarly, a "loop" query could help spot a money laundering ring.
Traditional SQL-based methods, as well as more recent graph indexing methods, will return no answer when an exact match does not exist. This is the first main feature of our method. It can find exact-, as well as near-matches, and it will present them to the user in our proposed "goodness" order. For example, our method tolerates indirect paths between, say, the "CEO" and the "Accountant" of the above sample query, when direct paths don't exist. Its second feature is scalability. In general, if the query has nq nodes and the data graph has n nodes, the problem needs polynomial time complexity O(n n q), which is prohibitive. Our G-Ray ("Graph X-Ray") method finds high-quality subgraphs in time linear on the size of the data graph.
Experimental results on the DLBP author-publication graph (with 356K nodes and 1.9M edges) illustrate both the effectiveness and scalability of our approach. The results agree with our intuition, and the speed is excellent. It takes 4 seconds on average fora 4-node query on the DBLP graph.) <|cite_end|> <|cite_start|> (Reference: SAGA: A Subgraph Matching Tool for Biological Graphs: MOTIVATION
With the rapid increase in the availability of biological graph datasets, there is a growing need for effective and efficient graph querying methods. Due to the noisy and incomplete characteristics of these datasets, exact graph matching methods have limited use and approximate graph matching methods are required. Unfortunately, existing graph matching methods are too restrictive as they only allow exact or near exact graph matching. This paper presents a novel approximate graph matching technique called SAGA. This technique employs a flexible model for computing graph similarity, which allows for node gaps, node mismatches and graph structural differences. SAGA employs an indexing technique that allows it to efficiently evaluate queries even against large graph datasets.
RESULTS
SAGA has been used to query biological pathways and literature datasets, which has revealed interesting similarities between distinct pathways that cannot be found by existing methods. These matches associate seemingly unrelated biological processes, connect studies in different sub-areas of biomedical research and thus pose hypotheses for new discoveries. SAGA is also orders of magnitude faster than existing methods.
AVAILABILITY
SAGA can be accessed freely via the web at http://www.eecs.umich.edu/saga. Binaries are also freely available at this website.) <|cite_end|> <|cite_start|> (Reference: Global Alignment of Multiple Protein Interaction Networks with Application to Functional Orthology Detection: Protein–protein interactions (PPIs) and their networks play a central role in all biological processes. Akin to the complete sequencing of genomes and their comparative analysis, complete descriptions of interactomes and their comparative analysis is fundamental to a deeper understanding of biological processes. A first step in such an analysis is to align two or more PPI networks. Here, we introduce an algorithm, IsoRank, for global alignment of multiple PPI networks. The guiding intuition here is that a protein in one PPI network is a good match for a protein in another network if their respective sequences and neighborhood topologies are a good match. We encode this intuition as an eigenvalue problem in a manner analogous to Google's PageRank method. Using IsoRank, we compute a global alignment of the Saccharomyces cerevisiae, Drosophila melanogaster, Caenorhabditis elegans, Mus musculus, and Homo sapiens PPI networks. We demonstrate that incorporating PPI data in ortholog prediction results in improvements over existing sequence-only approaches and over predictions from local alignments of the yeast and fly networks. Previous methods have been effective at identifying conserved, localized network patterns across pairs of networks. This work takes the further step of performing a global alignment of multiple PPI networks. It simultaneously uses sequence similarity and network data and, unlike previous approaches, explicitly models the tradeoff inherent in combining them. We expect IsoRank—with its simultaneous handling of node similarity and network similarity—to be applicable across many scientific domains.) <|cite_end|>, node-label mismatches <|cite_start|> (Reference: SAGA: A Subgraph Matching Tool for Biological Graphs: MOTIVATION
With the rapid increase in the availability of biological graph datasets, there is a growing need for effective and efficient graph querying methods. Due to the noisy and incomplete characteristics of these datasets, exact graph matching methods have limited use and approximate graph matching methods are required. Unfortunately, existing graph matching methods are too restrictive as they only allow exact or near exact graph matching. This paper presents a novel approximate graph matching technique called SAGA. This technique employs a flexible model for computing graph similarity, which allows for node gaps, node mismatches and graph structural differences. SAGA employs an indexing technique that allows it to efficiently evaluate queries even against large graph datasets.
RESULTS
SAGA has been used to query biological pathways and literature datasets, which has revealed interesting similarities between distinct pathways that cannot be found by existing methods. These matches associate seemingly unrelated biological processes, connect studies in different sub-areas of biomedical research and thus pose hypotheses for new discoveries. SAGA is also orders of magnitude faster than existing methods.
AVAILABILITY
SAGA can be accessed freely via the web at http://www.eecs.umich.edu/saga. Binaries are also freely available at this website.) <|cite_end|> <|cite_start|> (Reference: Global Alignment of Multiple Protein Interaction Networks with Application to Functional Orthology Detection: Protein–protein interactions (PPIs) and their networks play a central role in all biological processes. Akin to the complete sequencing of genomes and their comparative analysis, complete descriptions of interactomes and their comparative analysis is fundamental to a deeper understanding of biological processes. A first step in such an analysis is to align two or more PPI networks. Here, we introduce an algorithm, IsoRank, for global alignment of multiple PPI networks. The guiding intuition here is that a protein in one PPI network is a good match for a protein in another network if their respective sequences and neighborhood topologies are a good match. We encode this intuition as an eigenvalue problem in a manner analogous to Google's PageRank method. Using IsoRank, we compute a global alignment of the Saccharomyces cerevisiae, Drosophila melanogaster, Caenorhabditis elegans, Mus musculus, and Homo sapiens PPI networks. We demonstrate that incorporating PPI data in ortholog prediction results in improvements over existing sequence-only approaches and over predictions from local alignments of the yeast and fly networks. Previous methods have been effective at identifying conserved, localized network patterns across pairs of networks. This work takes the further step of performing a global alignment of multiple PPI networks. It simultaneously uses sequence similarity and network data and, unlike previous approaches, explicitly models the tradeoff inherent in combining them. We expect IsoRank—with its simultaneous handling of node similarity and network similarity—to be applicable across many scientific domains.) <|cite_end|>. However, all of these distance methods operate oblivious to the presence of \emph{context in the query graph}, thereby ignoring its impact on the eventual query-target graph similarity computation.
\\
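To make the flavour of these distance-based measures concrete, the following is a minimal, illustrative sketch (not the SAGA or IsoRank algorithm): it uses networkx's graph edit distance with a label-tolerant substitution cost, so node gaps and node-label mismatches are penalised rather than forbidden. All cost values below are assumptions chosen for illustration.
\begin{verbatim}
# A minimal, label-tolerant graph distance in the spirit of the distance-based
# measures above; cost values are illustrative, not taken from SAGA or IsoRank.
import networkx as nx

def node_subst_cost(u_attrs, v_attrs):
    # Free substitution for identical labels, small penalty for a label mismatch.
    return 0.0 if u_attrs.get("label") == v_attrs.get("label") else 0.5

def query_target_distance(query, target):
    # graph_edit_distance explores node insertions, deletions and substitutions;
    # the custom costs make it tolerant to node gaps and label mismatches.
    return nx.graph_edit_distance(
        query, target,
        node_subst_cost=node_subst_cost,
        node_del_cost=lambda attrs: 1.0,   # node gap in the query
        node_ins_cost=lambda attrs: 1.0,   # extra node in the target
    )

if __name__ == "__main__":
    q = nx.Graph()
    q.add_nodes_from([(0, {"label": "actor"}), (1, {"label": "movie"})])
    q.add_edge(0, 1)
    t = nx.Graph()
    t.add_nodes_from([(0, {"label": "actor"}), (1, {"label": "film"}),
                      (2, {"label": "director"})])
    t.add_edges_from([(0, 1), (1, 2)])
    print(query_target_distance(q, t))   # small cost despite a mismatch and a gap
\end{verbatim}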
\textbf{Information Propagation based Similarity Measures:} Information propagation is a popular approach for capturing the interactions represented in the $h$-hop neighborhood of each node. This concept significantly increases the expressive power of the similarity functions so designed, and has been used by NESS <|cite_start|> (Reference: Neighborhood Based Fast Graph Search in Large Networks: Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches.
In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost.) <|cite_end|>, NeMa <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|>and DeltaCon <|cite_start|> (Reference: DELTACON: A Principled Massive-Graph Similarity Function: How much did a network change since yesterday? How different is the wiring between Bob's brain (a left-handed male) and Alice's brain (a right-handed female)? Graph similarity with known node correspondence, i.e. the detection of changes in the connectivity of graphs, arises in numerous settings. In this work, we formally state the axioms and desired properties of the graph similarity functions, and evaluate when state-of-the-art methods fail to detect crucial connectivity changes in graphs. We propose DeltaCon, a principled, intuitive, and scalable algorithm that assesses the similarity between two graphs on the same nodes (e.g. employees of a company, customers of a mobile carrier). Experiments on various synthetic and real graphs showcase the advantages of our method over existing similarity measures. 
Finally, we employ DeltaCon to real applications: (a) we classify people to groups of high and low creativity based on their brain connectivity graphs, and (b) do temporal anomaly detection in the who-emails-whom Enron graph.) <|cite_end|>. However, despite capturing the neighborhood in a better way, all these techniques lack the ability to \emph{adapt} the distance function based on the \emph{query context}.
\\
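As a rough illustration of the information-propagation idea (not the exact NESS, NeMa or DeltaCon formulation), the sketch below propagates each node's label histogram over $h$ hops with a hypothetical decay factor, turning the graph into a set of vectors that can be compared without isomorphism tests.
\begin{verbatim}
# A rough sketch of h-hop information propagation: every node accumulates a
# decayed label histogram from its neighbourhood, so the whole graph becomes a
# set of vectors that can be indexed and compared without isomorphism tests.
# The decay factor alpha and the overlap score are assumptions, not the
# NESS/NeMa formulas.
from collections import Counter
import networkx as nx

def propagate_labels(G, hops=2, alpha=0.5):
    vec = {v: Counter({G.nodes[v]["label"]: 1.0}) for v in G}
    for _ in range(hops):
        new_vec = {}
        for v in G:
            agg = Counter(vec[v])                 # keep the node's own vector
            for u in G.neighbors(v):
                for lbl, w in vec[u].items():
                    agg[lbl] += alpha * w         # absorb neighbours, decayed
            new_vec[v] = agg
        vec = new_vec
    return vec

def node_similarity(p, q):
    # Histogram overlap between two propagated label vectors.
    keys = set(p) | set(q)
    return sum(min(p[k], q[k]) for k in keys) / max(sum(p.values()), sum(q.values()))

if __name__ == "__main__":
    G = nx.Graph()
    G.add_nodes_from([(0, {"label": "actor"}), (1, {"label": "movie"}),
                      (2, {"label": "director"})])
    G.add_edges_from([(0, 1), (1, 2)])
    vec = propagate_labels(G, hops=2)
    print(node_similarity(vec[0], vec[2]))
\end{verbatim}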
\textbf{Context-Aware Querying:} As discussed in Section~\ref{sec:intro}, the importance of \emph{query context} has been widely studied and appreciated in the domain of text databases facilitating web-searches. With wide-spread research in the use of context for effective querying <|cite_start|> (Reference: Context-aware ranking in web search: The context of a search query often provides a search engine meaningful hints for answering the current query better. Previous studies on context-aware search were either focused on the development of context models or limited to a relatively small scale investigation under a controlled laboratory setting. Particularly, about context-aware ranking for Web search, the following two critical problems are largely remained unsolved. First, how can we take advantage of different types of contexts in ranking? Second, how can we integrate context information into a ranking model? In this paper, we tackle the above two essential problems analytically and empirically. We develop different ranking principles for different types of contexts. Moreover, we adopt a learning-to-rank approach and integrate the ranking principles into a state-of-the-art ranking model by encoding the context information as features of the model. We empirically test our approach using a large search log data set obtained from a major commercial search engine. Our evaluation uses both human judgments and implicit user click data. The experimental results clearly show that our context-aware ranking approach improves the ranking of a commercial search engine which ignores context information. Furthermore, our method outperforms a baseline method which considers context information in ranking.) <|cite_end|> <|cite_start|> (Reference: Predicting short-term interests using activity-based search context: A query considered in isolation offers limited information about a searcher's intent. Query context that considers pre-query activity (e.g., previous queries and page visits), can provide richer information about search intentions. In this paper, we describe a study in which we developed and evaluated user interest models for the current query, its context (from pre-query session activity), and their combination, which we refer to as intent. Using large-scale logs, we evaluate how accurately each model predicts the user's short-term interests under various experimental conditions. In our study we: (i) determine the extent of opportunity for using context to model intent; (ii) compare the utility of different sources of behavioral evidence (queries, search result clicks, and Web page visits) for building predictive interest models, and; (iii) investigate optimally combining the query and its context by learning a model that predicts the context weight for each query. Our findings demonstrate significant opportunity in leveraging contextual information, show that context and source influence predictive accuracy, and show that we can learn a near-optimal combination of the query and context for each query. The findings can inform the design of search systems that leverage contextual information to better understand, model, and serve searchers' information needs.) <|cite_end|> <|cite_start|> (Reference: Towards context-aware search and analysis on social media data: Social media has changed the way we communicate. Social media data capture our social interactions and utterances in machine readable format. 
Searching and analysing massive and frequently updated social media data brings significant and diverse rewards across many different application domains, from politics and business to social science and epidemiology.
A notable proportion of social media data comes with explicit or implicit spatial annotations, and almost all social media data has temporal metadata. We view social media data as a constant stream of data points, each containing text with spatial and temporal contexts. We identify challenges relevant to each context, which we intend to subject to context aware querying and analysis, specifically including longitudinal analyses on social media archives, spatial keyword search, local intent search, and spatio-temporal intent search. Finally, for each context, emerging applications and further avenues for investigation are discussed.) <|cite_end|> <|cite_start|> (Reference: Context-Aware Querying for Multimodal Search Engines: ) <|cite_end|>, query suggestions and auto-completion <|cite_start|> (Reference: Context-aware query suggestion by mining click-through and session data: Query suggestion plays an important role in improving the usability of search engines. Although some recently proposed methods can make meaningful query suggestions by mining query patterns from search logs, none of them are context-aware - they do not take into account the immediately preceding queries as context in query suggestion. In this paper, we propose a novel context-aware query suggestion approach which is in two steps. In the offine model-learning step, to address data sparseness, queries are summarized into concepts by clustering a click-through bipartite. Then, from session data a concept sequence suffix tree is constructed as the query suggestion model. In the online query suggestion step, a user's search context is captured by mapping the query sequence submitted by the user to a sequence of concepts. By looking up the context in the concept sequence sufix tree, our approach suggests queries to the user in a context-aware manner. We test our approach on a large-scale search log of a commercial search engine containing 1:8 billion search queries, 2:6 billion clicks, and 840 million query sessions. The experimental results clearly show that our approach outperforms two baseline methods in both coverage and quality of suggestions.) <|cite_end|> <|cite_start|> (Reference: Context-sensitive Query Auto-completion: Query auto completion is known to provide poor predictions of the user's query when her input prefix is very short (e.g., one or two characters). In this paper we show that context, such as the user's recent queries, can be used to improve the prediction quality considerably even for such short prefixes. We propose a context-sensitive query auto completion algorithm, NearestCompletion, which outputs the completions of the user's input that are most similar to the context queries. To measure similarity, we represent queries and contexts as high-dimensional term-weighted vectors and resort to cosine similarity. The mapping from queries to vectors is done through a new query expansion technique that we introduce, which expands a query by traversing the query recommendation tree rooted at the query.
In order to evaluate our approach, we performed extensive experimentation over the public AOL query log. We demonstrate that when the recent user's queries are relevant to the current query she is typing, then after typing a single character, NearestCompletion's MRR is 48% higher relative to the MRR of the standard MostPopularCompletion algorithm on average. When the context is irrelevant, however, NearestCompletion's MRR is essentially zero. To mitigate this problem, we propose HybridCompletion, which is a hybrid of NearestCompletion with MostPopularCompletion. HybridCompletion is shown to dominate both NearestCompletion and MostPopularCompletion, achieving a total improvement of 31.5% in MRR relative to MostPopularCompletion on average.) <|cite_end|>and recommendations <|cite_start|> (Reference: A Graph-based model for context-aware recommendation using implicit feedback data: ) <|cite_end|>, this concept has become quotidian for web search. However, its use has been surprisingly absent in the field of (sub-)graph querying. Although <|cite_start|> (Reference: Context-Aware Object Connection Discovery in Large Graphs: Given a large graph and a set of objects, the task of object connection discovery is to find a subgraph that retains the best connection between the objects. Object connection discovery is useful to many important applications such as discovering the connection between different terrorist groups for counter-terrorism operations. Existing work considers only the connection between individual objects; however, in many real problems the objects usually have a context (e.g., a terrorist belongs to a terrorist group). We identify the context for the nodes in a large graph. We partition the graph into a set of communities based on the concept of modularity, where each community becomes naturally the context of the nodes within the community. By considering the context we also significantly improve the efficiency of object connection discovery, since we break down the big graph into much smaller communities. We first compute the best intra-community connection by maximizing the amount of information flow in the answer graph. Then, we extend the connection to the inter-community level by utilizing the community hierarchy relation, while the quality of the inter-community connection is also ensured by modularity. Our experiments show that our algorithm is three orders of magnitude faster than the state-of-the-art algorithm, while the quality of the query answer is comparable.) <|cite_end|>claims to employ the use of context for object connection discovery in large graphs, this claim is not completely true as the idea is just to use the concept of \emph{modularity} to create communities, which are eventually used as the context of a node. Thus, <|cite_start|> (Reference: Context-Aware Object Connection Discovery in Large Graphs: Given a large graph and a set of objects, the task of object connection discovery is to find a subgraph that retains the best connection between the objects. Object connection discovery is useful to many important applications such as discovering the connection between different terrorist groups for counter-terrorism operations. Existing work considers only the connection between individual objects; however, in many real problems the objects usually have a context (e.g., a terrorist belongs to a terrorist group). We identify the context for the nodes in a large graph.
We partition the graph into a set of communities based on the concept of modularity, where each community becomes naturally the context of the nodes within the community. By considering the context we also significantly improve the efficiency of object connection discovery, since we break down the big graph into much smaller communities. We first compute the best intra-community connection by maximizing the amount of information flow in the answer graph. Then, we extend the connection to the inter-community level by utilizing the community hierarchy relation, while the quality of the inter-community connection is also ensured by modularity. Our experiments show that our algorithm is three orders of magnitude faster than the state-of-the-art algorithm, while the quality of the query answer is comparable.) <|cite_end|>just captures the structural/topological context and clearly cannot be extended to exploit the node labels present in labeled graphs to understand the context. Moreover, the problem of object connection discovery is simpler than that of (sub-)graph querying. In sum, this work (\emph{RAQ}) presents the first effort to bring the power of context to (sub-)graph querying in large networks.
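For concreteness, the sketch below shows the modularity-based notion of node context described above, using networkx's greedy modularity communities; as noted in the text, such a context is purely structural and ignores node labels.
\begin{verbatim}
# The modularity-based notion of "context" described above: communities found
# by modularity maximisation act as the (purely structural) context of each
# node. Node labels play no role here.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def structural_context(G):
    communities = greedy_modularity_communities(G)
    return {v: cid for cid, members in enumerate(communities) for v in members}

if __name__ == "__main__":
    G = nx.karate_club_graph()
    ctx = structural_context(G)
    print(ctx[0], ctx[33])   # the two hub nodes typically fall in different communities
\end{verbatim}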
Note that although the literature has witnessed a plethora of indexing techniques for (sub-)graph querying <|cite_start|> (Reference: Neighborhood Based Fast Graph Search in Large Networks: Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches.
In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost.) <|cite_end|> <|cite_start|> (Reference: NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques.) <|cite_end|> <|cite_start|> (Reference: gStore: Answering SPARQL Queries via Subgraph Matching: Due to the increasing use of RDF data, efficient processing of SPARQL queries over RDF datasets has become an important issue. However, existing solutions suffer from two limitations: 1) they cannot answer SPARQL queries with wildcards in a scalable manner; and 2) they cannot handle frequent updates in RDF repositories efficiently. Thus, most of them have to reprocess the dataset from scratch. In this paper, we propose a graph-based approach to store and query RDF data. Rather than mapping RDF triples into a relational database as most existing methods do, we store RDF data as a large graph. A SPARQL query is then converted into a corresponding subgraph matching query. In order to speed up query processing, we develop a novel index, together with some effective pruning rules and efficient search algorithms. Our method can answer exact SPARQL queries and queries with wildcards in a uniform manner. 
We also propose an effective maintenance algorithm to handle online updates over RDF repositories. Extensive experiments confirm the efficiency and effectiveness of our solution.) <|cite_end|> <|cite_start|> (Reference: SAGA: A Subgraph Matching Tool for Biological Graphs: MOTIVATION
With the rapid increase in the availability of biological graph datasets, there is a growing need for effective and efficient graph querying methods. Due to the noisy and incomplete characteristics of these datasets, exact graph matching methods have limited use and approximate graph matching methods are required. Unfortunately, existing graph matching methods are too restrictive as they only allow exact or near exact graph matching. This paper presents a novel approximate graph matching technique called SAGA. This technique employs a flexible model for computing graph similarity, which allows for node gaps, node mismatches and graph structural differences. SAGA employs an indexing technique that allows it to efficiently evaluate queries even against large graph datasets.
RESULTS
SAGA has been used to query biological pathways and literature datasets, which has revealed interesting similarities between distinct pathways that cannot be found by existing methods. These matches associate seemingly unrelated biological processes, connect studies in different sub-areas of biomedical research and thus pose hypotheses for new discoveries. SAGA is also orders of magnitude faster than existing methods.
AVAILABILITY
SAGA can be accessed freely via the web at http://www.eecs.umich.edu/saga. Binaries are also freely available at this website.) <|cite_end|> | [
"<|reference_start|> Facebook graph search: Το χαρακτηριστικό του Facebook Graph Search, eίναι πeρίπου όπως τη μηχανή αναζήτησης της Google, ... <|reference_end|>",
"<|reference_start|> Neighborhood Based Fast Graph Search in Large Networks: Complex social and information network search becomes important with a variety of applications. In the core of these applications, lies a common and critical problem: Given a labeled network and a query graph, how to efficiently search the query graph in the target network. The presence of noise and the incomplete knowledge about the structure and content of the target network make it unrealistic to find an exact match. Rather, it is more appealing to find the top-k approximate matches.\n In this paper, we propose a neighborhood-based similarity measure that could avoid costly graph isomorphism and edit distance computation. Under this new measure, we prove that subgraph similarity search is NP hard, while graph similarity match is polynomial. By studying the principles behind this measure, we found an information propagation model that is able to convert a large network into a set of multidimensional vectors, where sophisticated indexing and similarity search algorithms are available. The proposed method, called Ness (Neighborhood Based Similarity Search), is appropriate for graphs with low automorphism and high noise, which are common in many social and information networks. Ness is not only efficient, but also robust against structural noise and information loss. Empirical results show that it can quickly and accurately find high-quality matches in large networks, with negligible cost. <|reference_end|>",
"<|reference_start|> NeMa: Fast graph search with label similarity: It is increasingly common to find real-life data represented as networks of labeled, heterogeneous entities. To query these networks, one often needs to identify the matches of a given query graph in a (typically large) network modeled as a target graph. Due to noise and the lack of fixed schema in the target graph, the query graph can substantially differ from its matches in the target graph in both structure and node labels, thus bringing challenges to the graph querying tasks. In this paper, we propose NeMa (Network Match), a neighborhood-based subgraph matching technique for querying real-life networks. (1) To measure the quality of the match, we propose a novel subgraph matching cost metric that aggregates the costs of matching individual nodes, and unifies both structure and node label similarities. (2) Based on the metric, we formulate the minimum cost subgraph matching problem. Given a query graph and a target graph, the problem is to identify the (top-k) matches of the query graph with minimum costs in the target graph. We show that the problem is NP-hard, and also hard to approximate. (3) We propose a heuristic algorithm for solving the problem based on an inference model. In addition, we propose optimization techniques to improve the efficiency of our method. (4) We empirically verify that NeMa is both effective and efficient compared to the keyword search and various state-of-the-art graph querying techniques. <|reference_end|>",
"<|reference_start|> DELTACON: A Principled Massive-Graph Similarity Function: How much did a network change since yesterday? How different is the wiring between Bob's brain (a left-handed male) and Alice's brain (a right-handed female)? Graph similarity with known node correspondence, i.e. the detection of changes in the connectivity of graphs, arises in numerous settings. In this work, we formally state the axioms and desired properties of the graph similarity functions, and evaluate when state-of-the-art methods fail to detect crucial connectivity changes in graphs. We propose DeltaCon, a principled, intuitive, and scalable algorithm that assesses the similarity between two graphs on the same nodes (e.g. employees of a company, customers of a mobile carrier). Experiments on various synthetic and real graphs showcase the advantages of our method over existing similarity measures. Finally, we employ DeltaCon to real applications: (a) we classify people to groups of high and low creativity based on their brain connectivity graphs, and (b) do temporal anomaly detection in the who-emails-whom Enron graph. <|reference_end|>"
] | [
5,
14,
15,
39
] | {"<|cite_1|>": "ss-1379308", "<|cite_2|>": "ss-1968179", "<|cite_3|>": "arxiv-32170", "<|cite_4|>": "ss-733437", "<|cite_5|>": "ss-1048758", "<|cite_6|>": "ss-1500092", "<|cite_7|>": "ss-1500093", "<|cite_8|>": "ss-1422063", "<|multi_cite_9_1|>": "ss-1363771", "<|multi_cite_9_2|>": "ss-1657957", "<|multi_cite_9_3|>": "ss-1422063", "<|multi_cite_9_4|>": "ss-725548", "<|cite_10|>": "ss-725548", "<|multi_cite_11_1|>": "ss-1363771", "<|multi_cite_11_2|>": "ss-1657957", "<|multi_cite_12_1|>": "ss-1363771", "<|multi_cite_12_2|>": "ss-1657957", "<|multi_cite_12_3|>": "arxiv-44712", "<|multi_cite_12_4|>": "ss-1941525", "<|multi_cite_13_1|>": "ss-725548", "<|multi_cite_13_2|>": "ss-1422063", "<|multi_cite_13_3|>": "ss-1363771", "<|cite_15|>": "ss-1379308", "<|cite_16|>": "ss-1379308", "<|multi_cite_17_1|>": "ss-725548", "<|multi_cite_17_2|>": "ss-1422063", "<|multi_cite_17_3|>": "ss-1363771", "<|multi_cite_18_1|>": "ss-1941525", "<|multi_cite_18_2|>": "ss-1620931", "<|multi_cite_19_1|>": "ss-1742295", "<|multi_cite_19_2|>": "ss-1844444", "<|cite_20|>": "ss-711495", "<|multi_cite_21_1|>": "ss-1609205", "<|multi_cite_21_2|>": "ss-711491", "<|multi_cite_21_3|>": "ss-711487", "<|multi_cite_22_1|>": "ss-711491", "<|multi_cite_22_2|>": "ss-711487", "<|cite_23|>": "ss-1657957", "<|cite_24|>": "ss-1363771", "<|cite_25|>": "arxiv-44712", "<|multi_cite_26_1|>": "ss-1996866", "<|multi_cite_26_2|>": "ss-1158945", "<|multi_cite_26_3|>": "ss-1146227", "<|multi_cite_26_4|>": "ss-1500094", "<|multi_cite_27_1|>": "ss-1103288", "<|multi_cite_27_2|>": "ss-856791", "<|cite_28|>": "ss-1500095", "<|cite_29|>": "ss-1156252", "<|cite_30|>": "ss-1156252", "<|multi_cite_31_1|>": "ss-1657957", "<|multi_cite_31_2|>": "ss-1363771", "<|multi_cite_31_3|>": "ss-1003853", "<|multi_cite_31_4|>": "ss-711491", "<|multi_cite_31_5|>": "ss-711495", "<|multi_cite_31_6|>": "ss-711487", "<|multi_cite_31_8|>": "arxiv-27457", "<|multi_cite_31_9|>": "ss-729752", "<|multi_cite_31_10|>": "ss-729737", "<|multi_cite_31_11|>": "ss-1609205", "<|multi_cite_31_12|>": "ss-711502", "<|multi_cite_31_13|>": "ss-729736", "<|cite_32|>": "ss-729757"} |
2303.14474-0 | <|paper_start|> Title: 3Mformer: Multi-order Multi-mode Transformer for Skeletal Action Recognition
Abstract: 3Mformer: Multi-order Multi-mode Transformer for Skeletal Action Recognition: Many skeletal action recognition models use GCNs to represent the human body by 3D body joints connected by body parts. GCNs aggregate one- or few-hop graph neighbourhoods, and ignore the dependency between body joints that are not linked. We propose to form a hypergraph to model hyper-edges between graph nodes (e.g., third- and fourth-order hyper-edges capture three and four nodes) which help capture higher-order motion patterns of groups of body joints. We split action sequences into temporal blocks, and a Higher-order Transformer (HoT) produces embeddings of each temporal block based on (i) the body joints, (ii) pairwise links of body joints and (iii) higher-order hyper-edges of skeleton body joints. We combine such HoT embeddings of hyper-edges of orders 1, ..., r by a novel Multi-order Multi-mode Transformer (3Mformer) with two modules whose order can be exchanged to achieve coupled-mode attention on coupled-mode tokens based on 'channel-temporal block', 'order-channel-body joint', 'channel-hyper-edge (any order)' and 'channel-only' pairs. The first module, called Multi-order Pooling (MP), additionally learns weighted aggregation along the hyper-edge mode, whereas the second module, Temporal block Pooling (TP), aggregates along the temporal block mode. Our end-to-end trainable network yields state-of-the-art results compared to GCN-, transformer- and hypergraph-based counterparts.
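As a toy illustration of the hypergraph construction sketched in the abstract (not the authors' HoT/3Mformer implementation; the joint count, block length and mean-pooling stand-in are assumptions), the snippet below enumerates all order-r hyper-edges over the body joints of one temporal block and forms one token per hyper-edge.
\begin{verbatim}
# A toy sketch of forming order-r hyper-edge tokens for one temporal block:
# every r-subset of body joints becomes one hyper-edge, summarised here by
# mean pooling as a stand-in for a learned HoT embedding.
from itertools import combinations
import numpy as np

def hyperedge_tokens(block, order):
    # block: (num_joints, 3 * block_len) stacked 3D coordinates of one block.
    edges = list(combinations(range(block.shape[0]), order))
    tokens = np.stack([block[list(e)].mean(axis=0) for e in edges])
    return edges, tokens                     # C(num_joints, order) tokens

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=(25, 3 * 8))     # e.g., 25 joints, 8 frames per block
    for r in (1, 2, 3, 4):
        edges, tok = hyperedge_tokens(block, r)
        print("order", r, "->", tok.shape[0], "hyper-edge tokens")
\end{verbatim}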
Introduction
Action Recognition has applications in video surveillance, human-computer interaction, sports analysis, and virtual reality <|cite_start|> (Reference: An Analysis of the Application of Simplified Silhouette to the Evaluation of k-means Clustering Validity: ) <|cite_end|> <|cite_start|> (Reference: Loss Switching Fusion with Similarity Search for Video Classification: From video streaming to security and surveillance applications, video data play an important role in our daily living today. However, managing a large amount of video data and retrieving the most useful information for the user remain a challenging task. In this paper, we propose a novel video classification system that would benefit the scene understanding task. We define our classification problem as classifying background and foreground motions using the same feature representation for outdoor scenes. This means that the feature representation needs to be robust enough and adaptable to different classification tasks. We propose a lightweight Loss Switching Fusion Network (LSFNet) for the fusion of spatiotemporal descriptors and a similarity search scheme with soft voting to boost the classification performance. The proposed system has a variety of potential applications such as content-based video clustering, video filtering, etc. Evaluation results on two private industry datasets show that our system is robust in both classifying different background motions and detecting human motions from these background motions.) <|cite_end|> <|cite_start|> (Reference: A Comparative Review of Recent Kinect-based Action Recognition Algorithms: Video-based human action recognition is currently one of the most active research areas in computer vision. Various research studies indicate that the performance of action recognition is highly dependent on the type of features being extracted and how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, there still does not exist a thorough comparison of these Kinect-based techniques under the grouping of feature types, such as handcrafted versus deep learning features and depth-based versus skeleton-based features. In this paper, we analyze and compare ten recent Kinect-based algorithms for both cross-subject action recognition and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. Our experiments show that the majority of methods perform better on cross-subject action recognition than cross-view action recognition, that skeleton-based features are more robust for cross-view recognition than depth-based features, and that deep learning features are suitable for large datasets.) <|cite_end|> <|cite_start|> (Reference: Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition With CNNs: In this paper, we revive the use of old-fashioned handcrafted video representations for action recognition and put new life into these techniques via a CNN-based hallucination step. Despite of the use of RGB and optical flow frames, the I3D model (amongst others) thrives on combining its output with the Improved Dense Trajectory (IDT) and extracted with its low-level video descriptors encoded via Bag-of-Words (BoW) and Fisher Vectors (FV). 
Such a fusion of CNNs and handcrafted representations is time-consuming due to pre-processing, descriptor extraction, encoding and tuning parameters. Thus, we propose an end-to-end trainable network with streams which learn the IDT-based BoW/FV representations at the training stage and are simple to integrate with the I3D model. Specifically, each stream takes I3D feature maps ahead of the last 1D conv. layer and learns to `translate' these maps to BoW/FV representations. Thus, our model can hallucinate and use such synthesized BoW/FV representations at the testing stage. We show that even features of the entire I3D optical flow stream can be hallucinated thus simplifying the pipeline. Our model saves 20-55h of computations and yields state-of-the-art results on four publicly available datasets.) <|cite_end|> <|cite_start|> (Reference: Tensor Representations for Action Recognition: Human actions in video sequences are characterized by the complex interplay between spatial features and their temporal dynamics. In this paper, we propose novel tensor representations for compactly capturing such higher-order relationships between visual features for the task of action recognition. We propose two tensor-based feature representations, viz. (i) sequence compatibility kernel (SCK) and (ii) dynamics compatibility kernel (DCK). SCK builds on the spatio-temporal correlations between features, whereas DCK explicitly models the action dynamics of a sequence. We also explore generalization of SCK, coined SCK(+), that operates on subsequences to capture the local-global interplay of correlations, which can incorporate multi-modal inputs e.g., skeleton 3D body-joints and per-frame classifier scores obtained from deep learning models trained on videos. We introduce linearization of these kernels that lead to compact and fast descriptors. We provide experiments on (i) 3D skeleton action sequences, (ii) fine-grained video sequences, and (iii) standard non-fine-grained videos. As our final representations are tensors that capture higher-order relationships of features, they relate to co-occurrences for robust fine-grained recognition. We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN) which have been long speculated to perform spectral detection of higher-order occurrences, thus detecting fine-grained relationships of features rather than merely count features in action sequences. We prove that a tensor of order r, built from Z* dimensional features, coupled with EPN indeed detects if at least one higher-order occurrence is `projected' into one of its binom(Z*,r) subspaces of dim. r represented by the tensor, thus forming a Tensor Power Normalization metric endowed with binom(Z*,r) such `detectors'.) <|cite_end|> <|cite_start|> (Reference: High-order Tensor Pooling with Attention for Action Recognition: We aim at capturing high-order statistics of feature vectors formed by a neural network, and propose end-to-end second- and higher-order pooling to form a tensor descriptor. Tensor descriptors require a robust similarity measure due to low numbers of aggregated vectors and the burstiness phenomenon, when a given feature appears more/less frequently than statistically expected. The Heat Diffusion Process (HDP) on a graph Laplacian is closely related to the Eigenvalue Power Normalization (EPN) of the covariance/autocorrelation matrix, whose inverse forms a loopy graph Laplacian. 
We show that the HDP and the EPN play the same role, i.e., to boost or dampen the magnitude of the eigenspectrum thus preventing the burstiness. We equip higher-order tensors with EPN which acts as a spectral detector of higher-order occurrences to prevent burstiness. We also prove that for a tensor of order r built from d dimensional feature descriptors, such a detector gives the likelihood if at least one higher-order occurrence is 'projected' into one of binom(d,r) subspaces represented by the tensor; thus forming a tensor power normalization metric endowed with binom(d,r) such 'detectors'. For experimental contributions, we apply several second- and higher-order pooling variants to action recognition, provide previously not presented comparisons of such pooling variants, and show state-of-the-art results on HMDB-51, YUP++ and MPII Cooking Activities.) <|cite_end|> <|cite_start|> (Reference: Self-supervising Action Recognition by Statistical Moment and Subspace Descriptors: In this paper, we build on a concept of self-supervision by taking RGB frames as input to learn to predict both action concepts and auxiliary descriptors e.g., object descriptors. So-called hallucination streams are trained to predict auxiliary cues, simultaneously fed into classification layers, and then hallucinated at the testing stage to aid network. We design and hallucinate two descriptors, one leveraging four popular object detectors applied to training videos, and the other leveraging image- and video-level saliency detectors. The first descriptor encodes the detector- and ImageNet-wise class prediction scores, confidence scores, and spatial locations of bounding boxes and frame indexes to capture the spatio-temporal distribution of features per video. Another descriptor encodes spatio-angular gradient distributions of saliency maps and intensity patterns. Inspired by the characteristic function of the probability distribution, we capture four statistical moments on the above intermediate descriptors. As numbers of coefficients in the mean, covariance, coskewness and cokurtotsis grow linearly, quadratically, cubically and quartically w.r.t. the dimension of feature vectors, we describe the covariance matrix by its leading n' eigenvectors (so-called subspace) and we capture skewness/kurtosis rather than costly coskewness/cokurtosis. We obtain state of the art on five popular datasets such as Charades and EPIC-Kitchens.) <|cite_end|> <|cite_start|> (Reference: 3D Skeleton-based Few-shot Action Recognition with JEANIE is not so Naïve: In this paper, we propose a Few-shot Learning pipeline for 3D skeleton-based action recognition by Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE). To factor out misalignment between query and support sequences of 3D body joints, we propose an advanced variant of Dynamic Time Warping which jointly models each smooth path between the query and support frames to achieve simultaneously the best alignment in the temporal and simulated camera viewpoint spaces for end-to-end learning under the limited few-shot training data. Sequences are encoded with a temporal block encoder based on Simple Spectral Graph Convolution, a lightweight linear Graph Neural Network backbone (we also include a setting with a transformer). Finally, we propose a similarity-based loss which encourages the alignment of sequences of the same class while preventing the alignment of unrelated sequences. 
We demonstrate state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II.) <|cite_end|> <|cite_start|> (Reference: Fusing Higher-order Features in Graph Neural Networks for Skeleton-based Action Recognition: Skeleton sequences are lightweight and compact, and thus are ideal candidates for action recognition on edge devices. Recent skeleton-based action recognition methods extract features from 3D joint coordinates as spatial-temporal cues, using these representations in a graph neural network for feature fusion to boost recognition performance. The use of first- and second-order features, i.e., joint and bone representations, has led to high accuracy. Nonetheless, many models are still confused by actions that have similar motion trajectories. To address these issues, we propose fusing higher-order features in the form of angular encoding into modern architectures to robustly capture the relationships between joints and body parts. This simple fusion with popular spatial-temporal graph neural networks achieves new state-of-the-art accuracy in two large benchmarks, including NTU60 and NTU120, while employing fewer parameters and reduced run time. Our source code is publicly available at: https://github.com/ZhenyueQin/Angular-Skeleton-Encoding.) <|cite_end|> <|cite_start|> (Reference: Uncertainty-DTW for Time Series and Sequences: Dynamic Time Warping (DTW) is used for matching pairs of sequences and celebrated in applications such as forecasting the evolution of time series, clustering time series or even matching sequence pairs in few-shot action recognition. The transportation plan of DTW contains a set of paths; each path matches frames between two sequences under a varying degree of time warping, to account for varying temporal intra-class dynamics of actions. However, as DTW is the smallest distance among all paths, it may be affected by the feature uncertainty which varies across time steps/frames. Thus, in this paper, we propose to model the so-called aleatoric uncertainty of a differentiable (soft) version of DTW. To this end, we model the heteroscedastic aleatoric uncertainty of each path by the product of likelihoods from Normal distributions, each capturing variance of pair of frames. (The path distance is the sum of base distances between features of pairs of frames of the path.) The Maximum Likelihood Estimation (MLE) applied to a path yields two terms: (i) a sum of Euclidean distances weighted by the variance inverse, and (ii) a sum of log-variance regularization terms. Thus, our uncertainty-DTW is the smallest weighted path distance among all paths, and the regularization term (penalty for the high uncertainty) is the aggregate of log-variances along the path. The distance and the regularization term can be used in various objectives. We showcase forecasting the evolution of time series, estimating the Fr\'echet mean of time series, and supervised/unsupervised few-shot action recognition of the articulated human 3D body joints.) <|cite_end|> <|cite_start|> (Reference: Temporal-Viewpoint Transportation Plan for Skeletal Few-shot Action Recognition: We propose a Few-shot Learning pipeline for 3D skeleton-based action recognition by Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE). 
To factor out misalignment between query and support sequences of 3D body joints, we propose an advanced variant of Dynamic Time Warping which jointly models each smooth path between the query and support frames to achieve simultaneously the best alignment in the temporal and simulated camera viewpoint spaces for end-to-end learning under the limited few-shot training data. Sequences are encoded with a temporal block encoder based on Simple Spectral Graph Convolution, a lightweight linear Graph Neural Network backbone. We also include a setting with a transformer. Finally, we propose a similarity-based loss which encourages the alignment of sequences of the same class while preventing the alignment of unrelated sequences. We show state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II.) <|cite_end|>.
\lei{Different from video-based methods which mainly focus on modeling the spatio-temporal representations from RGB frames and/or optical flow <|cite_start|> (Reference: An Analysis of the Application of Simplified Silhouette to the Evaluation of k-means Clustering Validity: ) <|cite_end|> <|cite_start|> (Reference: Loss Switching Fusion with Similarity Search for Video Classification: From video streaming to security and surveillance applications, video data play an important role in our daily living today. However, managing a large amount of video data and retrieving the most useful information for the user remain a challenging task. In this paper, we propose a novel video classification system that would benefit the scene understanding task. We define our classification problem as classifying background and foreground motions using the same feature representation for outdoor scenes. This means that the feature representation needs to be robust enough and adaptable to different classification tasks. We propose a lightweight Loss Switching Fusion Network (LSFNet) for the fusion of spatiotemporal descriptors and a similarity search scheme with soft voting to boost the classification performance. The proposed system has a variety of potential applications such as content-based video clustering, video filtering, etc. Evaluation results on two private industry datasets show that our system is robust in both classifying different background motions and detecting human motions from these background motions.) <|cite_end|> <|cite_start|> (Reference: A Comparative Review of Recent Kinect-based Action Recognition Algorithms: Video-based human action recognition is currently one of the most active research areas in computer vision. Various research studies indicate that the performance of action recognition is highly dependent on the type of features being extracted and how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, there still does not exist a thorough comparison of these Kinect-based techniques under the grouping of feature types, such as handcrafted versus deep learning features and depth-based versus skeleton-based features. In this paper, we analyze and compare ten recent Kinect-based algorithms for both cross-subject action recognition and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. Our experiments show that the majority of methods perform better on cross-subject action recognition than cross-view action recognition, that skeleton-based features are more robust for cross-view recognition than depth-based features, and that deep learning features are suitable for large datasets.) <|cite_end|> <|cite_start|> (Reference: Hallucinating IDT Descriptors and I3D Optical Flow Features for Action Recognition With CNNs: In this paper, we revive the use of old-fashioned handcrafted video representations for action recognition and put new life into these techniques via a CNN-based hallucination step. Despite of the use of RGB and optical flow frames, the I3D model (amongst others) thrives on combining its output with the Improved Dense Trajectory (IDT) and extracted with its low-level video descriptors encoded via Bag-of-Words (BoW) and Fisher Vectors (FV). 
Such a fusion of CNNs and handcrafted representations is time-consuming due to pre-processing, descriptor extraction, encoding and tuning parameters. Thus, we propose an end-to-end trainable network with streams which learn the IDT-based BoW/FV representations at the training stage and are simple to integrate with the I3D model. Specifically, each stream takes I3D feature maps ahead of the last 1D conv. layer and learns to `translate' these maps to BoW/FV representations. Thus, our model can hallucinate and use such synthesized BoW/FV representations at the testing stage. We show that even features of the entire I3D optical flow stream can be hallucinated thus simplifying the pipeline. Our model saves 20-55h of computations and yields state-of-the-art results on four publicly available datasets.) <|cite_end|> <|cite_start|> (Reference: Self-supervising Action Recognition by Statistical Moment and Subspace Descriptors: In this paper, we build on a concept of self-supervision by taking RGB frames as input to learn to predict both action concepts and auxiliary descriptors e.g., object descriptors. So-called hallucination streams are trained to predict auxiliary cues, simultaneously fed into classification layers, and then hallucinated at the testing stage to aid network. We design and hallucinate two descriptors, one leveraging four popular object detectors applied to training videos, and the other leveraging image- and video-level saliency detectors. The first descriptor encodes the detector- and ImageNet-wise class prediction scores, confidence scores, and spatial locations of bounding boxes and frame indexes to capture the spatio-temporal distribution of features per video. Another descriptor encodes spatio-angular gradient distributions of saliency maps and intensity patterns. Inspired by the characteristic function of the probability distribution, we capture four statistical moments on the above intermediate descriptors. As numbers of coefficients in the mean, covariance, coskewness and cokurtotsis grow linearly, quadratically, cubically and quartically w.r.t. the dimension of feature vectors, we describe the covariance matrix by its leading n' eigenvectors (so-called subspace) and we capture skewness/kurtosis rather than costly coskewness/cokurtosis. We obtain state of the art on five popular datasets such as Charades and EPIC-Kitchens.) <|cite_end|> <|cite_start|> (Reference: High-order Tensor Pooling with Attention for Action Recognition: We aim at capturing high-order statistics of feature vectors formed by a neural network, and propose end-to-end second- and higher-order pooling to form a tensor descriptor. Tensor descriptors require a robust similarity measure due to low numbers of aggregated vectors and the burstiness phenomenon, when a given feature appears more/less frequently than statistically expected. The Heat Diffusion Process (HDP) on a graph Laplacian is closely related to the Eigenvalue Power Normalization (EPN) of the covariance/autocorrelation matrix, whose inverse forms a loopy graph Laplacian. We show that the HDP and the EPN play the same role, i.e., to boost or dampen the magnitude of the eigenspectrum thus preventing the burstiness. We equip higher-order tensors with EPN which acts as a spectral detector of higher-order occurrences to prevent burstiness. 
We also prove that for a tensor of order r built from d dimensional feature descriptors, such a detector gives the likelihood if at least one higher-order occurrence is 'projected' into one of binom(d,r) subspaces represented by the tensor; thus forming a tensor power normalization metric endowed with binom(d,r) such 'detectors'. For experimental contributions, we apply several second- and higher-order pooling variants to action recognition, provide previously not presented comparisons of such pooling variants, and show state-of-the-art results on HMDB-51, YUP++ and MPII Cooking Activities.) <|cite_end|>, skeleton sequences, representing a spatio-temporal evolution of 3D body joints, have been proven robust against sensor noises and effective in action recognition while being computationally and storage efficient <|cite_start|> (Reference: An Analysis of the Application of Simplified Silhouette to the Evaluation of k-means Clustering Validity: ) <|cite_end|> <|cite_start|> (Reference: A Comparative Review of Recent Kinect-based Action Recognition Algorithms: Video-based human action recognition is currently one of the most active research areas in computer vision. Various research studies indicate that the performance of action recognition is highly dependent on the type of features being extracted and how the actions are represented. Since the release of the Kinect camera, a large number of Kinect-based human action recognition techniques have been proposed in the literature. However, there still does not exist a thorough comparison of these Kinect-based techniques under the grouping of feature types, such as handcrafted versus deep learning features and depth-based versus skeleton-based features. In this paper, we analyze and compare ten recent Kinect-based algorithms for both cross-subject action recognition and cross-view action recognition using six benchmark datasets. In addition, we have implemented and improved some of these techniques and included their variants in the comparison. Our experiments show that the majority of methods perform better on cross-subject action recognition than cross-view action recognition, that skeleton-based features are more robust for cross-view recognition than depth-based features, and that deep learning features are suitable for large datasets.) <|cite_end|> <|cite_start|> (Reference: Tensor Representations for Action Recognition: Human actions in video sequences are characterized by the complex interplay between spatial features and their temporal dynamics. In this paper, we propose novel tensor representations for compactly capturing such higher-order relationships between visual features for the task of action recognition. We propose two tensor-based feature representations, viz. (i) sequence compatibility kernel (SCK) and (ii) dynamics compatibility kernel (DCK). SCK builds on the spatio-temporal correlations between features, whereas DCK explicitly models the action dynamics of a sequence. We also explore generalization of SCK, coined SCK(+), that operates on subsequences to capture the local-global interplay of correlations, which can incorporate multi-modal inputs e.g., skeleton 3D body-joints and per-frame classifier scores obtained from deep learning models trained on videos. We introduce linearization of these kernels that lead to compact and fast descriptors. We provide experiments on (i) 3D skeleton action sequences, (ii) fine-grained video sequences, and (iii) standard non-fine-grained videos. 
As our final representations are tensors that capture higher-order relationships of features, they relate to co-occurrences for robust fine-grained recognition. We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN) which have been long speculated to perform spectral detection of higher-order occurrences, thus detecting fine-grained relationships of features rather than merely count features in action sequences. We prove that a tensor of order r, built from Z* dimensional features, coupled with EPN indeed detects if at least one higher-order occurrence is `projected' into one of its binom(Z*,r) subspaces of dim. r represented by the tensor, thus forming a Tensor Power Normalization metric endowed with binom(Z*,r) such `detectors'.) <|cite_end|> <|cite_start|> (Reference: 3D Skeleton-based Few-shot Action Recognition with JEANIE is not so Naïve: In this paper, we propose a Few-shot Learning pipeline for 3D skeleton-based action recognition by Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE). To factor out misalignment between query and support sequences of 3D body joints, we propose an advanced variant of Dynamic Time Warping which jointly models each smooth path between the query and support frames to achieve simultaneously the best alignment in the temporal and simulated camera viewpoint spaces for end-to-end learning under the limited few-shot training data. Sequences are encoded with a temporal block encoder based on Simple Spectral Graph Convolution, a lightweight linear Graph Neural Network backbone (we also include a setting with a transformer). Finally, we propose a similarity-based loss which encourages the alignment of sequences of the same class while preventing the alignment of unrelated sequences. We demonstrate state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II.) <|cite_end|> <|cite_start|> (Reference: Fusing Higher-order Features in Graph Neural Networks for Skeleton-based Action Recognition: Skeleton sequences are lightweight and compact, and thus are ideal candidates for action recognition on edge devices. Recent skeleton-based action recognition methods extract features from 3D joint coordinates as spatial-temporal cues, using these representations in a graph neural network for feature fusion to boost recognition performance. The use of first- and second-order features, i.e., joint and bone representations, has led to high accuracy. Nonetheless, many models are still confused by actions that have similar motion trajectories. To address these issues, we propose fusing higher-order features in the form of angular encoding into modern architectures to robustly capture the relationships between joints and body parts. This simple fusion with popular spatial-temporal graph neural networks achieves new state-of-the-art accuracy in two large benchmarks, including NTU60 and NTU120, while employing fewer parameters and reduced run time. Our source code is publicly available at: https://github.com/ZhenyueQin/Angular-Skeleton-Encoding.) <|cite_end|> <|cite_start|> (Reference: Uncertainty-DTW for Time Series and Sequences: Dynamic Time Warping (DTW) is used for matching pairs of sequences and celebrated in applications such as forecasting the evolution of time series, clustering time series or even matching sequence pairs in few-shot action recognition. 
The transportation plan of DTW contains a set of paths; each path matches frames between two sequences under a varying degree of time warping, to account for varying temporal intra-class dynamics of actions. However, as DTW is the smallest distance among all paths, it may be affected by the feature uncertainty which varies across time steps/frames. Thus, in this paper, we propose to model the so-called aleatoric uncertainty of a differentiable (soft) version of DTW. To this end, we model the heteroscedastic aleatoric uncertainty of each path by the product of likelihoods from Normal distributions, each capturing variance of pair of frames. (The path distance is the sum of base distances between features of pairs of frames of the path.) The Maximum Likelihood Estimation (MLE) applied to a path yields two terms: (i) a sum of Euclidean distances weighted by the variance inverse, and (ii) a sum of log-variance regularization terms. Thus, our uncertainty-DTW is the smallest weighted path distance among all paths, and the regularization term (penalty for the high uncertainty) is the aggregate of log-variances along the path. The distance and the regularization term can be used in various objectives. We showcase forecasting the evolution of time series, estimating the Fr\'echet mean of time series, and supervised/unsupervised few-shot action recognition of the articulated human 3D body joints.) <|cite_end|> <|cite_start|> (Reference: Temporal-Viewpoint Transportation Plan for Skeletal Few-shot Action Recognition: We propose a Few-shot Learning pipeline for 3D skeleton-based action recognition by Joint tEmporal and cAmera viewpoiNt alIgnmEnt (JEANIE). To factor out misalignment between query and support sequences of 3D body joints, we propose an advanced variant of Dynamic Time Warping which jointly models each smooth path between the query and support frames to achieve simultaneously the best alignment in the temporal and simulated camera viewpoint spaces for end-to-end learning under the limited few-shot training data. Sequences are encoded with a temporal block encoder based on Simple Spectral Graph Convolution, a lightweight linear Graph Neural Network backbone. We also include a setting with a transformer. Finally, we propose a similarity-based loss which encourages the alignment of sequences of the same class while preventing the alignment of unrelated sequences. We show state-of-the-art results on NTU-60, NTU-120, Kinetics-skeleton and UWA3D Multiview Activity II.) <|cite_end|>.}
Skeleton data are usually obtained either by localizing the 2D/3D coordinates of human body joints with depth sensors or by applying pose estimation algorithms to videos <|cite_start|> (Reference: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields: We present an approach to efficiently detect the 2D pose of multiple people in an image. The approach uses a nonparametric representation, which we refer to as Part Affinity Fields (PAFs), to learn to associate body parts with individuals in the image. The architecture encodes global context, allowing a greedy bottom-up parsing step that maintains high accuracy while achieving realtime performance, irrespective of the number of people in the image. The architecture is designed to jointly learn part locations and their association via two branches of the same sequential prediction process. Our method placed first in the inaugural COCO 2016 keypoints challenge, and significantly exceeds the previous state-of-the-art result on the MPII Multi-Person benchmark, both in performance and efficiency.) <|cite_end|>. Skeleton sequences enjoy (i) the simple structural connectivity of the skeletal graph and (ii) the temporal continuity of 3D body joints evolving in time. While the temporal evolution of each body joint is highly informative, embeddings of individual body joints are insensitive to relations between body parts. Moreover, although the links between adjacent 3D body joints (following the structural connectivity) are informative because they model relations, such links connect nodes whose temporal evolution is highly correlated. Thus, modeling larger groups of 3D body joints as hyper-edges can capture more complex spatio-temporal motion dynamics.
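For concreteness, the following minimal NumPy sketch (sizes, names and the block length are illustrative placeholders rather than our actual implementation) shows how a skeleton sequence of $J$ joints over $T$ frames can be stored and split into $\tau$ temporal blocks, each keeping its temporal mode local:
\begin{verbatim}
import numpy as np

J, T = 25, 96                 # assumed: 25 body joints, 96 frames
block_len = 8                 # assumed temporal block length
seq = np.random.randn(J, 3, T)          # 3D joint coordinates over time

# Split the sequence into tau non-overlapping temporal blocks
# B_1, ..., B_tau; each block keeps its own local temporal mode.
tau = T // block_len
blocks = [seq[:, :, t * block_len:(t + 1) * block_len]
          for t in range(tau)]          # each block: (J, 3, block_len)

# Factor out (flatten) the local temporal mode so that each joint in a
# block is described by one compact vector.
X = [b.reshape(J, -1) for b in blocks]  # each: (J, 3 * block_len)
print(tau, X[0].shape)                  # 12 (25, 24)
\end{verbatim}
Each such per-block slice is what the per-block embedding introduced below operates on.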
Existing graph-based models differ mainly in how they handle temporal information.
\lei{A Graph Neural Network (GNN) may encode the spatial neighborhood of each node, followed by temporal aggregation with an LSTM <|cite_start|> (Reference: An Attention Enhanced Graph Convolutional LSTM Network for Skeleton-Based Action Recognition: Skeleton-based action recognition is an important task that requires the adequate understanding of movement characteristics of a human action from the given skeleton sequence. Recent studies have shown that exploring spatial and temporal features of the skeleton sequence is vital for this task. Nevertheless, how to effectively extract discriminative spatial and temporal features is still a challenging problem. In this paper, we propose a novel Attention Enhanced Graph Convolutional LSTM Network (AGC-LSTM) for human action recognition from skeleton data. The proposed AGC-LSTM can not only capture discriminative features in spatial configuration and temporal dynamics but also explore the co-occurrence relationship between spatial and temporal domains. We also present a temporal hierarchical architecture to increases temporal receptive fields of the top AGC-LSTM layer, which boosts the ability to learn the high-level semantic representation and significantly reduces the computation cost. Furthermore, to select discriminative spatial information, the attention mechanism is employed to enhance information of key joints in each AGC-LSTM layer. Experimental results on two datasets are provided: NTU RGB+D dataset and Northwestern-UCLA dataset. The comparison results demonstrate the effectiveness of our approach and show that our approach outperforms the state-of-the-art methods on both datasets.) <|cite_end|> <|cite_start|> (Reference: 2019 IEEE International Conference on Multimedia and Expo (ICME): ) <|cite_end|>.
Alternatively, Graph Convolutional Network (GCN) may perform spatio-temporal convolution in the neighborhood of each node <|cite_start|> (Reference: Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition: Dynamics of human body skeletons convey significant information for human action recognition. Conventional approaches for modeling skeletons usually rely on hand-crafted parts or traversal rules, thus resulting in limited expressive power and difficulties of generalization. In this work, we propose a novel model of dynamic skeletons called Spatial-Temporal Graph Convolutional Networks (ST-GCN), which moves beyond the limitations of previous methods by automatically learning both the spatial and temporal patterns from data. This formulation not only leads to greater expressive power but also stronger generalization capability. On two large datasets, Kinetics and NTU-RGBD, it achieves substantial improvements over mainstream methods.) <|cite_end|>.}
Spatial GCNs perform convolution within one or two hop distance of each node, \eg, spatio-temporal GCN model called ST-GCN <|cite_start|> (Reference: Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition: Dynamics of human body skeletons convey significant information for human action recognition. Conventional approaches for modeling skeletons usually rely on hand-crafted parts or traversal rules, thus resulting in limited expressive power and difficulties of generalization. In this work, we propose a novel model of dynamic skeletons called Spatial-Temporal Graph Convolutional Networks (ST-GCN), which moves beyond the limitations of previous methods by automatically learning both the spatial and temporal patterns from data. This formulation not only leads to greater expressive power but also stronger generalization capability. On two large datasets, Kinetics and NTU-RGBD, it achieves substantial improvements over mainstream methods.) <|cite_end|>models spatio-temporal vicinity of each 3D body joint.
As ST-GCN applies convolution only along the structural connections (links between body joints), structurally distant joints, which may capture key patterns of an action, are largely ignored.
ST-GCN captures ever larger neighborhoods as layers are added but suffers from oversmoothing that can be mitigated by linear GCNs <|cite_start|> (Reference: Simple spectral graph convolution: neighborhoods of various sizes. Moreover, we show that our design incorporates larger neighborhoods compared to SGC thus coping better with oversmoothing. We explain that limiting over-dominance of the largest neighborhoods in the aggregation step is a desired approach to limit oversmoothing while preserving large context of each node. We also show that in spectral analysis that S 2 GC is a trade-off between the low-and high-pass filters which leads to capturing the global and local contexts of each node. Moreover, we show how S 2 GC and APPNP (Klicpera et al., 2019a) are related and explain why S 2 GC captures a range of neighborhoods better than APPNP. Our experimental results include node clustering, unsupervised and semi-supervised node classifi-cation, node property prediction and supervised text classification. We show that S 2 GC is highly competitive often significantly outperforming state-of-the-art methods) <|cite_end|> <|cite_start|> (Reference: Contrastive Laplacian Eigenmaps: Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. In this paper, we extend the celebrated Laplacian Eigenmaps with contrastive learning, and call them COntrastive Laplacian EigenmapS (COLES). Starting from a GAN-inspired contrastive formulation, we show that the Jensen-Shannon divergence underlying many contrastive graph embedding models fails under disjoint positive and negative distributions, which may naturally emerge during sampling in the contrastive setting. In contrast, we demonstrate analytically that COLES essentially minimizes a surrogate of Wasserstein distance, which is known to cope well under disjoint distributions. Moreover, we show that the loss of COLES belongs to the family of so-called block-contrastive losses, previously shown to be superior compared to pair-wise losses typically used by contrastive methods. We show on popular benchmarks/backbones that COLES offers favourable accuracy/scalability compared to DeepWalk, GCN, Graph2Gauss, DGI and GRACE baselines.) <|cite_end|> <|cite_start|> (Reference: Generalized Laplacian Eigenmaps: Graph contrastive learning attracts/disperses node representations for similar/dissimilar node pairs under some notion of similarity. It may be combined with a low-dimensional embedding of nodes to preserve intrinsic and structural properties of a graph. COLES, a recent graph contrastive method combines traditional graph embedding and negative sampling into one framework. COLES in fact minimizes the trace difference between the within-class scatter matrix encapsulating the graph connectivity and the total scatter matrix encapsulating negative sampling. In this paper, we propose a more essential framework for graph embedding, called Generalized Laplacian EigeNmaps (GLEN), which learns a graph representation by maximizing the rank difference between the total scatter matrix and the within-class scatter matrix, resulting in the minimum class separation guarantee. However, the rank difference minimization is an NP-hard problem. 
Thus, we replace the trace difference that corresponds to the difference of nuclear norms by the difference of LogDet expressions, which we argue is a more accurate surrogate for the NP-hard rank difference than the trace difference. While enjoying a lesser computational cost, the difference of LogDet terms is lower-bounded by the Affine-invariant Riemannian metric (AIRM) and upper-bounded by AIRM scaled by the factor of √ m . We show on popular benchmarks/backbones that GLEN offers favourable accuracy/scalability compared to state-of-the-art baselines.) <|cite_end|>.
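To make the distinction concrete, the sketch below (a generic toy example with a chain-shaped adjacency; it is not the exact formulation of the cited models) contrasts a single one-hop spatial graph convolution with a linear multi-hop propagation that averages powers of the normalized adjacency:
\begin{verbatim}
import numpy as np

def normalize_adj(A):
    # Symmetrically normalized adjacency with self-loops:
    # A_hat = D^{-1/2} (A + I) D^{-1/2}
    A = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A.sum(1))
    return A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

J, d_in, d_out = 25, 24, 64             # assumed sizes
A = np.zeros((J, J))
for i in range(J - 1):                  # toy chain of joints as a stand-in skeleton
    A[i, i + 1] = A[i + 1, i] = 1.0
A_hat = normalize_adj(A)
X = np.random.randn(J, d_in)            # per-joint features of one temporal block
W = np.random.randn(d_in, d_out)

# One spatial GCN layer: every joint aggregates its 1-hop neighbours.
H1 = np.maximum(A_hat @ X @ W, 0)

# Linear multi-hop propagation (in the spirit of simple/linear GCNs):
# averaging the first K powers of A_hat enlarges the receptive field
# without stacking nonlinear layers, which helps limit oversmoothing.
K = 4
P = sum(np.linalg.matrix_power(A_hat, k) for k in range(1, K + 1)) / K
H_lin = P @ X @ W
\end{verbatim}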
Human actions are associated with interacting groups of skeletal joints, \eg, the wrist alone, head-wrist, head-wrist-ankles, \etc. The impact of these groups of joints on each action differs, and the degree of influence of each joint should be learned. Accordingly, designing a better model for skeleton data is vital, given that the topology of the skeleton graph is suboptimal.
While GCN can be applied to a fully-connected graph (\ie, 3D body joints as densely connected graph nodes), Higher-order Transformer (HoT) <|cite_start|> (Reference: Transformers Generalize DeepSets and Can be Extended to Graphs and Hypergraphs: We present a generalization of Transformers to any-order permutation invariant data (sets, graphs, and hypergraphs). We begin by observing that Transformers generalize DeepSets, or first-order (set-input) permutation invariant MLPs. Then, based on recently characterized higher-order invariant MLPs, we extend the concept of self-attention to higher orders and propose higher-order Transformers for order-$k$ data ($k=2$ for graphs and $k>2$ for hypergraphs). Unfortunately, higher-order Transformers turn out to have prohibitive complexity $\mathcal{O}(n^{2k})$ to the number of input nodes $n$. To address this problem, we present sparse higher-order Transformers that have quadratic complexity to the number of input hyperedges, and further adopt the kernel attention approach to reduce the complexity to linear. In particular, we show that the sparse second-order Transformers with kernel attention are theoretically more expressive than message passing operations while having an asymptotically identical complexity. Our models achieve significant performance improvement over invariant MLPs and message-passing graph neural networks in large-scale graph regression and set-to-(hyper)graph prediction tasks. Our implementation is available at https://github.com/jw9730/hot.) <|cite_end|>has been proven more efficient.
Thus, we propose to use hypergraphs with hyper-edges of order $1$ to $r$ to effectively represent skeleton data for action recognition.
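As a small illustration of this construction (the joint count and the maximum order are example values only), hyper-edges of orders $1$ to $r$ are simply all subsets of up to $r$ body joints:
\begin{verbatim}
import math
from itertools import combinations

J = 25                      # assumed number of body joints
r = 3                       # maximum hyper-edge order used in this example

# Hyper-edges of order n are all n-element subsets of the joint set:
# order 1 = single joints, order 2 = joint pairs (edges),
# order 3 = joint triplets, and so on.
hyper_edges = {n: list(combinations(range(J), n)) for n in range(1, r + 1)}

for n in range(1, r + 1):
    assert len(hyper_edges[n]) == math.comb(J, n)
    print(f"order {n}: {math.comb(J, n)} hyper-edges")
# order 1: 25, order 2: 300, order 3: 2300 hyper-edges.
\end{verbatim}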
In contrast to GCNs,
our encoder contains an MLP followed by three HoT branches that encode first-, second- and higher-order hyper-edges, \ie, sets of individual body joints, edges between pairs of nodes, hyper-edges between triplets of nodes, \etc. Each branch has its own learnable parameters and processes the temporal blocks\footnote{Each temporal block enjoys a locally factored out (removed) temporal mode, which makes each block representation compact.} one-by-one.
We notice that (i) the number of hyper-edges over $J$ joints grows rapidly with the order $r$, \ie, there are $\binom{J}{i}$ hyper-edges of order $i$ for $i=1,\cdots,r$, so embeddings of the highest order
dominate the lower orders in terms of volume if such embeddings are merely concatenated, and (ii) long-range temporal dependencies of feature maps are insufficiently explored, as sequences are split into $\tau$ temporal blocks for computational tractability.
Merely concatenating outputs of HoT branches of orders $1$ to $r$, and across $\tau$ blocks, is sub-optimal.
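The sketch below quantifies this observation on example sizes (channel width, block count and maximum order are placeholders); it mimics per-block, per-order branch outputs, concatenates them along the hyper-edge mode and stacks the blocks:
\begin{verbatim}
import math
import numpy as np

J, d, tau, r = 25, 16, 12, 3   # assumed: joints, channels, blocks, max order

# Stand-in for the per-block output of the order-n HoT branch:
# one d-dimensional embedding per hyper-edge of order n.
def branch_output(n):
    return np.random.randn(d, math.comb(J, n))

# Concatenate all orders along the hyper-edge mode per block, then stack
# the tau blocks into the multi-order feature tensor.
Phi = [np.concatenate([branch_output(n) for n in range(1, r + 1)], axis=1)
       for _ in range(tau)]            # one (d, N) slice per temporal block
M = np.stack(Phi, axis=2)              # shape (d, N, tau)
N = M.shape[1]                         # 25 + 300 + 2300 = 2625 hyper-edges

share = [math.comb(J, n) / N for n in range(1, r + 1)]
print([f"{100 * s:.1f}%" for s in share])   # ~1.0%, ~11.4%, ~87.6%:
# plain concatenation is dominated by the highest order, and relations
# across the tau blocks are still left unmodeled.
\end{verbatim}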
Thus, our \lei{Multi-order Multi-mode Transformer (3Mformer)} with two modules whose order can be exchanged, realizes a variation of coupled-mode tokens based on `channel-temporal block', `\lei{order-}channel-body joint', `channel-hyper-edge (any order)' and `channel-only' pairs. As HoT operates block-by-block, `channel-temporal block' tokens and weighted hyper-edge aggregation in Multi-order Pooling (MP) help combine information flow block-wise. Various {coupled-mode} tokens help improve results further due to different focus of each attention mechanism. As the {block-temporal} mode needs to be aggregated (number of blocks varies across sequences), Temporal block Pooling (TP) can use rank pooling <|cite_start|> (Reference: Rank Pooling for Action Recognition: We propose a function-based temporal pooling method that captures the latent structure of the video sequence data - e.g. how frame-level features evolve over time in a video. We show how the parameters of a function that has been fit to the video data can serve as a robust new video representation. As a specific example, we learn a pooling function via ranking machines. By learning to rank the frame-level features of a video in chronological order, we obtain a new representation that captures the video-wide temporal dynamics of a video, suitable for action recognition. Other than ranking functions, we explore different parametric models that could also explain the temporal changes in videos. The proposed functional pooling methods, and rank pooling in particular, is easy to interpret and implement, fast to compute and effective in recognizing a wide variety of actions. We evaluate our method on various benchmarks for generic action, fine-grained action and gesture recognition. Results show that rank pooling brings an absolute improvement of 7-10 average pooling baseline. At the same time, rank pooling is compatible with and complementary to several appearance and local motion based methods and features, such as improved trajectories and deep learning features.) <|cite_end|>, second-order <|cite_start|> (Reference: Second-order Democratic Aggregation: Aggregated second-order features extracted from deep convolutional networks have been shown to be effective for texture generation, fine-grained recognition, material classification, and scene understanding. In this paper, we study a class of orderless aggregation functions designed to minimize interference or equalize contributions in the context of second-order features and we show that they can be computed just as efficiently as their first-order counterparts and they have favorable properties over aggregation by summation. Another line of work has shown that matrix power normalization after aggregation can significantly improve the generalization of second-order representations. We show that matrix power normalization implicitly equalizes contributions during aggregation thus establishing a connection between matrix normalization techniques and prior work on minimizing interference. Based on the analysis we present {\gamma}-democratic aggregators that interpolate between sum ({\gamma}=1) and democratic pooling ({\gamma}=0) outperforming both on several classification tasks. Moreover, unlike power normalization, the {\gamma}-democratic aggregations can be computed in a low dimensional space by sketching that allows the use of very high-dimensional second-order features. This results in a state-of-the-art performance on several datasets.) 
<|cite_end|> <|cite_start|> (Reference: Global Second-order Pooling Convolutional Networks: Deep Convolutional Networks (ConvNets) are fundamental to, besides large-scale visual recognition, a lot of vision tasks. As the primary goal of the ConvNets is to characterize complex boundaries of thousands of classes in a high-dimensional space, it is critical to learn higher-order representations for enhancing non-linear modeling capability. Recently, Global Second-order Pooling (GSoP), plugged at the end of networks, has attracted increasing attentions, achieving much better performance than classical, first-order networks in a variety of vision tasks. However, how to effectively introduce higher-order representation in earlier layers for improving non-linear capability of ConvNets is still an open problem. In this paper, we propose a novel network model introducing GSoP across from lower to higher layers for exploiting holistic image information throughout a network. Given an input 3D tensor outputted by some previous convolutional layer, we perform GSoP to obtain a covariance matrix which, after nonlinear transformation, is used for tensor scaling along channel dimension. Similarly, we can perform GSoP along spatial dimension for tensor scaling as well. In this way, we can make full use of the second-order statistics of the holistic image throughout a network. The proposed networks are thoroughly evaluated on large-scale ImageNet-1K, and experiments have shown that they outperformed non-trivially the counterparts while achieving state-of-the-art results.) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: Attentional Pooling for Action Recognition: We introduce a simple yet surprisingly powerful model to incorporate attention in action recognition and human object interaction tasks. Our proposed attention module can be trained with or without extra supervision, and gives a sizable boost in accuracy while keeping the network size and computational cost nearly the same. It leads to significant improvements over state of the art base architecture on three standard action recognition benchmarks across still images and videos, and establishes new state of the art on MPII dataset with 12.5% relative improvement. We also perform an extensive analysis of our attention module both empirically and analytically. In terms of the latter, we introduce a novel derivation of bottom-up and top-down attention as low-rank approximations of bilinear pooling methods (typically used for fine-grained classification). From this perspective, our attention formulation suggests a novel characterization of action recognition as a fine-grained recognition problem.) <|cite_end|> <|cite_start|> (Reference: Few-Shot Object Detection by Second-Order Pooling: ) <|cite_end|> <|cite_start|> (Reference: Power Normalizations in Fine-grained Image, Few-shot Image and Graph Classification: Power Normalizations (PN) are useful non-linear operators which tackle feature imbalances in classification problems. We study PNs in the deep learning setup via a novel PN layer pooling feature maps. Our layer combines the feature vectors and their respective spatial locations in the feature maps produced by the last convolutional layer of CNN into a positive definite matrix with second-order statistics to which PN operators are applied, forming so-called Second-order Pooling (SOP). 
As the main goal of this paper is to study Power Normalizations, we investigate the role and meaning of MaxExp and Gamma, two popular PN functions. To this end, we provide probabilistic interpretations of such element-wise operators and discover surrogates with well-behaved derivatives for end-to-end training. Furthermore, we look at the spectral applicability of MaxExp and Gamma by studying Spectral Power Normalizations (SPN). We show that SPN on the autocorrelation/covariance matrix and the Heat Diffusion Process (HDP) on a graph Laplacian matrix are closely related, thus sharing their properties. Such a finding leads us to the culmination of our work, a fast spectral MaxExp which is a variant of HDP for covariances/autocorrelation matrices. We evaluate our ideas on fine-grained recognition, scene recognition, and material classification, as well as in few-shot learning and graph classification.) <|cite_end|> <|cite_start|> (Reference: Learning Partial Correlation based Deep Visual Representation for Image Classification: Visual representation based on covariance matrix has demonstrates its efficacy for image classification by characterising the pairwise correlation of different channels in convolutional feature maps. However, pairwise correlation will become misleading once there is another channel correlating with both channels of interest, resulting in the ``confounding'' effect. For this case, ``partial correlation'' which removes the confounding effect shall be estimated instead. Nevertheless, reliably estimating partial correlation requires to solve a symmetric positive definite matrix optimisation, known as sparse inverse covariance estimation (SICE). How to incorporate this process into CNN remains an open issue. In this work, we formulate SICE as a novel structured layer of CNN. To ensure end-to-end trainability, we develop an iterative method to solve the above matrix optimisation during forward and backward propagation steps. Our work obtains a partial correlation based deep visual representation and mitigates the small sample problem often encountered by covariance matrix estimation in CNN. Computationally, our model can be effectively trained with GPU and works well with a large number of channels of advanced CNNs. Experiments show the efficacy and superior classification performance of our deep visual representation compared to covariance matrix based counterparts.) <|cite_end|>or higher-order pooling <|cite_start|> (Reference: 2017 IEEE Winter Conference on Applications of Computer Vision, WACV 2017, Santa Rosa, CA, USA, March 24-31, 2017: ) <|cite_end|> <|cite_start|> (Reference: High-order Tensor Pooling with Attention for Action Recognition: We aim at capturing high-order statistics of feature vectors formed by a neural network, and propose end-to-end second- and higher-order pooling to form a tensor descriptor. Tensor descriptors require a robust similarity measure due to low numbers of aggregated vectors and the burstiness phenomenon, when a given feature appears more/less frequently than statistically expected. The Heat Diffusion Process (HDP) on a graph Laplacian is closely related to the Eigenvalue Power Normalization (EPN) of the covariance/autocorrelation matrix, whose inverse forms a loopy graph Laplacian. We show that the HDP and the EPN play the same role, i.e., to boost or dampen the magnitude of the eigenspectrum thus preventing the burstiness. 
We equip higher-order tensors with EPN which acts as a spectral detector of higher-order occurrences to prevent burstiness. We also prove that for a tensor of order r built from d dimensional feature descriptors, such a detector gives the likelihood if at least one higher-order occurrence is 'projected' into one of binom(d,r) subspaces represented by the tensor; thus forming a tensor power normalization metric endowed with binom(d,r) such 'detectors'. For experimental contributions, we apply several second- and higher-order pooling variants to action recognition, provide previously not presented comparisons of such pooling variants, and show state-of-the-art results on HMDB-51, YUP++ and MPII Cooking Activities.) <|cite_end|> <|cite_start|> (Reference: Tensor Representations for Action Recognition: Human actions in video sequences are characterized by the complex interplay between spatial features and their temporal dynamics. In this paper, we propose novel tensor representations for compactly capturing such higher-order relationships between visual features for the task of action recognition. We propose two tensor-based feature representations, viz. (i) sequence compatibility kernel (SCK) and (ii) dynamics compatibility kernel (DCK). SCK builds on the spatio-temporal correlations between features, whereas DCK explicitly models the action dynamics of a sequence. We also explore generalization of SCK, coined SCK(+), that operates on subsequences to capture the local-global interplay of correlations, which can incorporate multi-modal inputs e.g., skeleton 3D body-joints and per-frame classifier scores obtained from deep learning models trained on videos. We introduce linearization of these kernels that lead to compact and fast descriptors. We provide experiments on (i) 3D skeleton action sequences, (ii) fine-grained video sequences, and (iii) standard non-fine-grained videos. As our final representations are tensors that capture higher-order relationships of features, they relate to co-occurrences for robust fine-grained recognition. We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN) which have been long speculated to perform spectral detection of higher-order occurrences, thus detecting fine-grained relationships of features rather than merely count features in action sequences. We prove that a tensor of order r, built from Z* dimensional features, coupled with EPN indeed detects if at least one higher-order occurrence is `projected' into one of its binom(Z*,r) subspaces of dim. r represented by the tensor, thus forming a Tensor Power Normalization metric endowed with binom(Z*,r) such `detectors'.) <|cite_end|> <|cite_start|> (Reference: Kernelized Few-shot Object Detection with Efficient Integral Aggregation: We design a Kernelized Few-shot Object Detector by leveraging kernelized matrices computed over multiple proposal regions, which yield expressive non-linear representations whose model complexity is learned on the fly. Our pipeline contains several modules. An Encoding Network encodes support and query images. Our Kernelized Autocorrelation unit forms the linear, polynomial and RBF kernelized representations from features extracted within support regions of support images. These features are then cross-correlated against features of a query image to obtain attention weights, and generate query proposal regions via an Attention Region Proposal Net. 
As the query proposal regions are many, each described by the linear, polynomial and RBF kernelized matrices, their formation is costly but that cost is reduced by our proposed Integral Region-of-Interest Aggregation unit. Finally, the Multi-head Relation Net combines all kernelized (second-order) representations with the first-order feature maps to learn support-query class relations and locations. We outperform the state of the art on novel classes by 3.8%, 5.4% and 5.7% mAP on PASCAL VOC 2007, FSOD, and COCO.) <|cite_end|> <|cite_start|> (Reference: Time-rEversed diffusioN tEnsor Transformer: A new TENET of Few-Shot Object Detection: In this paper, we tackle the challenging problem of Few-shot Object Detection. Existing FSOD pipelines (i) use average-pooled representations that result in information loss; and/or (ii) discard position information that can help detect object instances. Consequently, such pipelines are sensitive to large intra-class appearance and geometric variations between support and query images. To address these drawbacks, we propose a Time-rEversed diffusioN tEnsor Transformer (TENET), which i) forms high-order tensor representations that capture multi-way feature occurrences that are highly discriminative, and ii) uses a transformer that dynamically extracts correlations between the query image and the entire support set, instead of a single average-pooled support embedding. We also propose a Transformer Relation Head (TRH), equipped with higher-order representations, which encodes correlations between query regions and the entire support set, while being sensitive to the positional variability of object instances. Our model achieves state-of-the-art results on PASCAL VOC, FSOD, and COCO.) <|cite_end|>.
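A schematic reading of these two modules is sketched below; it applies a generic scaled dot-product attention over one choice of coupled-mode tokens and then performs simple hyper-edge and block pooling (all projections and the mixing matrix are random placeholders, not the trained MP/TP layers):
\begin{verbatim}
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

d, N, tau = 16, 325, 12          # channels, hyper-edges, temporal blocks (assumed)
M = np.random.randn(d, N, tau)   # stand-in for the multi-order feature tensor

# One choice of coupled-mode tokens: flatten the 'channel-temporal block'
# pair into the token dimension, indexed by the hyper-edge mode.
tokens = M.transpose(1, 0, 2).reshape(N, d * tau)      # (N, d*tau)

# Generic scaled dot-product self-attention over these tokens
# (projections are random placeholders, not trained MP weights).
dk = 32
Wq, Wk, Wv = (np.random.randn(d * tau, dk) for _ in range(3))
Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
attn_out = softmax(Q @ K.T / np.sqrt(dk)) @ V          # (N, dk)

# MP-style weighted aggregation along the hyper-edge mode with a mixing
# matrix H (random here), then TP-style pooling over the block mode
# (average shown; maximum or rank pooling are the alternatives named above).
H = np.random.randn(N, 8)
pooled_edges = np.einsum('dnt,nm->dmt', M, H)           # (d, 8, tau)
pooled_blocks = pooled_edges.mean(axis=2)               # (d, 8)
\end{verbatim}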
\vspace{0.2cm}
In summary, our main contributions are listed as follows:$\!\!\!$
\renewcommand{\labelenumi}{\roman{enumi}.}
\begin{enumerate}[leftmargin=0.6cm]
\item We model the skeleton data as a hypergraph of orders $1$ to $r$ (set, graph and/or hypergraph), where human body joints serve as nodes. Higher-order Transformer embeddings of the so-formed hyper-edges represent various groups of 3D body joints and capture various higher-order dynamics important for action recognition.
\item As HoT embeddings represent individual hyper-edge order and block, we introduce a novel Multi-order Multi-mode Transformer (3Mformer) with two modules, Multi-order Pooling and Temporal block Pooling. Their goal is to form coupled-mode tokens such as `channel-temporal block', `order-channel-body joint', `channel-hyper-edge (any order)' and `channel-only', and perform weighted hyper-edge aggregation and temporal block aggregation.
\end{enumerate}
Our 3Mformer outperforms other GCN- and hypergraph-based models on NTU-60, NTU-120, Kinetics-Skeleton and Northwestern-UCLA by a large margin.
\begin{figure*}[t]
\centering\includegraphics[width=1\linewidth]{imgs/pipe4lei3.pdf}
\caption{Pipeline overview. Each sequence is split into $\tau$ temporal blocks $\mathbf{B}_1,\cdots,\mathbf{B}_\tau$. Subsequently, each block is embedded by a simple MLP into $\mathbf{X}_1,\cdots,\mathbf{X}_\tau$, which are passed to Higher-order Transformers (\lei{HoT} ($n\!=\!1,\cdots,r$)) in order to obtain feature tensors $\mathbf{\Phi}_1,\cdots,\mathbf{\Phi}_\tau$. These tensors are subsequently concatenated by $\odot$ along the hyper-edge mode into a multi-order feature tensor $\boldsymbol{\mathcal{M}}$. The final step is a \lei{Multi-order Multi-mode Transformer (3Mformer} from Section \ref{sec:appr}), which contains two \lei{complementary} branches, MP$\rightarrow$TP and TP$\rightarrow$MP, whose outputs are concatenated by $\odot$ and passed to the classifier. MP and TP perform the \lei{\leicr{Coupled-mode}} Self-Attention (\leicr{CmSA}) with the so-called \leicr{coupled-mode} tokens, based
on `channel-temporal block', `\lei{order-}channel-body joint', `channel-hyper-edge' and `channel-only' pairs. To this end, MP also contains weighted pooling along the hyper-edge mode via a learnable matrix $\mathbf{H}$ (and $\mathbf{H}'$ in the other branch). TP also contains \lei{block-temporal} pooling, denoted by $g(\cdot)$, whose role is to capture the \lei{block-temporal} order with average, maximum, rank pooling, \etc. In our experiments we show that MP and TP designed in this way efficiently process the hyper-edge feature representations from the HoT branches. Appendix \ref{app:3mf} shows the full visualization of our 3Mformer.}
\label{fig:pipeline}
\end{figure*}
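For readers who prefer pseudocode, the data flow of the figure can be summarized as follows (the function name and all callables are placeholders; this is a schematic, not the released implementation):
\begin{verbatim}
import numpy as np

def three_m_former_pipeline(sequence, mlp, hot_branches, MP, TP, classifier,
                            block_len=8):
    """Schematic data flow of the pipeline figure; all callables are placeholders."""
    # 1) Split the sequence into temporal blocks B_1..B_tau and embed each
    #    block with a simple MLP.
    J, _, T = sequence.shape
    tau = T // block_len
    blocks = [sequence[:, :, t * block_len:(t + 1) * block_len]
              for t in range(tau)]
    X = [mlp(b) for b in blocks]

    # 2) Per-block Higher-order Transformer branches (orders 1..r),
    #    concatenated along the hyper-edge mode -> Phi_1..Phi_tau.
    Phi = [np.concatenate([hot(x) for hot in hot_branches], axis=1) for x in X]

    # 3) Stack the blocks into the multi-order feature tensor M (d, N, tau).
    M = np.stack(Phi, axis=2)

    # 4) Two complementary branches (MP followed by TP, and TP followed by MP),
    #    concatenated and passed to the classifier.
    out = np.concatenate([TP(MP(M)), MP(TP(M))], axis=-1)
    return classifier(out)
\end{verbatim}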
Related Work
Below we describe popular action recognition models for skeletal data.
\vspace{0.1cm}
\noindent{\bf Graph-based models}. Popular GCN-based models include the Attention enhanced Graph Convolutional LSTM network (AGC-LSTM) <|cite_start|> (Reference: An Attention Enhanced Graph Convolutional LSTM Network for Skeleton-Based Action Recognition: Skeleton-based action recognition is an important task that requires the adequate understanding of movement characteristics of a human action from the given skeleton sequence. Recent studies have shown that exploring spatial and temporal features of the skeleton sequence is vital for this task. Nevertheless, how to effectively extract discriminative spatial and temporal features is still a challenging problem. In this paper, we propose a novel Attention Enhanced Graph Convolutional LSTM Network (AGC-LSTM) for human action recognition from skeleton data. The proposed AGC-LSTM can not only capture discriminative features in spatial configuration and temporal dynamics but also explore the co-occurrence relationship between spatial and temporal domains. We also present a temporal hierarchical architecture to increases temporal receptive fields of the top AGC-LSTM layer, which boosts the ability to learn the high-level semantic representation and significantly reduces the computation cost. Furthermore, to select discriminative spatial information, the attention mechanism is employed to enhance information of key joints in each AGC-LSTM layer. Experimental results on two datasets are provided: NTU RGB+D dataset and Northwestern-UCLA dataset. The comparison results demonstrate the effectiveness of our approach and show that our approach outperforms the state-of-the-art methods on both datasets.) <|cite_end|>, the Actional-Structural GCN (AS-GCN) <|cite_start|> (Reference: Actional-Structural Graph Convolutional Networks for Skeleton-based Action Recognition: Action recognition with skeleton data has recently attracted much attention in computer vision. Previous studies are mostly based on fixed skeleton graphs, only capturing local physical dependencies among joints, which may miss implicit joint correlations. To capture richer dependencies, we introduce an encoder-decoder structure, called A-link inference module, to capture action-specific latent dependencies, i.e. actional links, directly from actions. We also extend the existing skeleton graphs to represent higher-order dependencies, i.e. structural links. Combing the two types of links into a generalized skeleton graph, we further propose the actional-structural graph convolution network (AS-GCN), which stacks actional-structural graph convolution and temporal convolution as a basic building block, to learn both spatial and temporal features for action recognition. A future pose prediction head is added in parallel to the recognition head to help capture more detailed action patterns through self-supervision. We validate AS-GCN in action recognition using two skeleton data sets, NTU-RGB+D and Kinetics. The proposed AS-GCN achieves consistently large improvement compared to the state-of-the-art methods. As a side product, AS-GCN also shows promising results for future pose prediction.) 
<|cite_end|>, Dynamic Directed GCN (DDGCN) <|cite_start|> (Reference: DDGCN: A Dynamic Directed Graph Convolutional Network for Action Recognition: ) <|cite_end|>, Decoupling GCN with DropGraph module <|cite_start|> (Reference: Decoupling GCN with DropGraph Module for Skeleton-Based Action Recognition: ) <|cite_end|>, Shift-GCN <|cite_start|> (Reference: {Skeleton-based action recognition with shift graph convolutional network: Action recognition with skeleton data is attracting more attention in computer vision. Recently, graph convolutional networks (GCNs), which model the human body skeletons as spatiotemporal graphs, have obtained remarkable performance. However, the computational complexity of GCN-based methods are pretty heavy, typically over 15 GFLOPs for one action sample. Recent works even reach about 100 GFLOPs. Another shortcoming is that the receptive fields of both spatial graph and temporal graph are inflexible. Although some works enhance the expressiveness of spatial graph by introducing incremental adaptive modules, their performance is still limited by regular GCN structures. In this paper, we propose a novel shift graph convolutional network (Shift-GCN) to overcome both shortcomings. Instead of using heavy regular graph convolutions, our Shift-GCN is composed of novel shift graph operations and lightweight point-wise convolutions, where the shift graph operations provide flexible receptive fields for both spatial graph and temporal graph. On three datasets for skeleton-based action recognition, the proposed Shift-GCN notably exceeds the state-of-the-art methods with more than 10 times less computational complexity.) <|cite_end|>, Semantics-Guided Neural Networks (SGN) <|cite_start|> (Reference: Semantics-Guided Neural Networks for Efficient Skeleton-Based Human Action Recognition: Skeleton-based human action recognition has attracted great interest thanks to the easy accessibility of the human skeleton data. Recently, there is a trend of using very deep feedforward neural networks to model the 3D coordinates of joints without considering the computational efficiency. In this paper, we propose a simple yet effective semantics-guided neural network (SGN) for skeleton-based action recognition. We explicitly introduce the high level semantics of joints (joint type and frame index) into the network to enhance the feature representation capability. In addition, we exploit the relationship of joints hierarchically through two modules, i.e., a joint-level module for modeling the correlations of joints in the same frame and a framelevel module for modeling the dependencies of frames by taking the joints in the same frame as a whole. A strong baseline is proposed to facilitate the study of this field. With an order of magnitude smaller model size than most previous works, SGN achieves the state-of-the-art performance on the NTU60, NTU120, and SYSU datasets. The source code is available at https://github.com/microsoft/SGN.) <|cite_end|>, AdaSGN <|cite_start|> (Reference: AdaSGN: Adapting Joint Number and Model Size for Efficient Skeleton-Based Action Recognition: Existing methods for skeleton-based action recognition mainly focus on improving the recognition accuracy, whereas the efficiency of the model is rarely considered. Recently, there are some works trying to speed up the skeleton modeling by designing light-weight modules. 
However, in addition to the model size, the amount of the data involved in the calculation is also an important factor for the running speed, especially for the skeleton data where most of the joints are redundant or non-informative to identify a specific skeleton. Besides, previous works usually employ one fix-sized model for all the samples regardless of the difficulty of recognition, which wastes computations for easy samples. To address these limitations, a novel approach, called AdaSGN, is proposed in this paper, which can reduce the computational cost of the inference process by adaptively controlling the input number of the joints of the skeleton on-the-fly. Moreover, it can also adaptively select the optimal model size for each sample to achieve a better trade-off between accuracy and efficiency. We conduct extensive experiments on three challenging datasets, namely, NTU-60, NTU-120 and SHREC, to verify the superiority of the proposed approach, where AdaSGN achieves comparable or even higher performance with much lower GFLOPs compared with the baseline method.) <|cite_end|>, Context Aware GCN (CA-GCN) <|cite_start|> (Reference: {Context aware graph convolution for skeleton-based action recognition: Graph convolutional models have gained impressive successes on skeleton based human action recognition task. As graph convolution is a local operation, it cannot fully investigate non-local joints that could be vital to recognizing the action. For example, actions like typing and clapping request the cooperation of two hands, which are distant from each other in a human skeleton graph. Multiple graph convolutional layers thus tend to be stacked together to increase receptive field, which brings in computational inefficiency and optimization difficulty. But there is still no guarantee that distant joints (e.g. two hands) can be well integrated. In this paper, we propose a context aware graph convolutional network (CA-GCN). Besides the computation of localized graph convolution, CA-GCN considers a context term for each vertex by integrating information of all other vertices. Long range dependencies among joints are thus naturally integrated in context information, which then eliminates the need of stacking multiple layers to enlarge receptive field and greatly simplifies the network. Moreover, we further propose an advanced CA-GCN, in which asymmetric relevance measurement and higher level representation are utilized to compute context information for more flexibility and better performance. Besides the joint features, our CA-GCN could also be extended to handle graphs with edge (limb) features. Extensive experiments on two real-world datasets demonstrate the importance of context information and the effectiveness of the proposed CA-GCN in skeleton based action recognition.) <|cite_end|>, Channel-wise Topology Refinement Graph Convolution Network (CTR-GCN) <|cite_start|> (Reference: Channel-wise Topology Refinement Graph Convolution for Skeleton-Based Action Recognition: Graph convolutional networks (GCNs) have been widely used and achieved remarkable results in skeleton-based action recognition. In GCNs, graph topology dominates feature aggregation and therefore is the key to extracting representative features. In this work, we propose a novel Channel-wise Topology Refinement Graph Convolution (CTR-GC) to dynamically learn different topologies and effectively aggregate joint features in different channels for skeleton-based action recognition. 
The proposed CTR-GC models channel-wise topologies through learning a shared topology as a generic prior for all channels and refining it with channel-specific correlations for each channel. Our refinement method introduces few extra parameters and significantly reduces the difficulty of modeling channel-wise topologies. Furthermore, via reformulating graph convolutions into a unified form, we find that CTR-GC relaxes strict constraints of graph convolutions, leading to stronger representation capability. Combining CTR-GC with temporal modeling modules, we develop a powerful graph convolutional network named CTR-GCN which notably outperforms state-of-the-art methods on the NTU RGB+D, NTU RGB+D 120, and NW-UCLA datasets.) <|cite_end|>and a family of Efficient GCN (EfficientGCN-Bx) <|cite_start|> (Reference: Constructing Stronger and Faster Baselines for Skeleton-based Action Recognition: One essential problem in skeleton-based action recognition is how to extract discriminative features over all skeleton joints. However, the complexity of the recent State-Of-The-Art (SOTA) models for this task tends to be exceedingly sophisticated and over-parameterized. The low efficiency in model training and inference has increased the validation costs of model architectures in large-scale datasets. To address the above issue, recent advanced separable convolutional layers are embedded into an early fused Multiple Input Branches (MIB) network, constructing an efficient Graph Convolutional Network (GCN) baseline for skeleton-based action recognition. In addition, based on such the baseline, we design a compound scaling strategy to expand the model's width and depth synchronously, and eventually obtain a family of efficient GCN baselines with high accuracies and small amounts of trainable parameters, termed EfficientGCN-Bx, where "x" denotes the scaling coefficient. On two large-scale datasets, i.e., NTU RGB+D 60 and 120, the proposed EfficientGCN-B4 baseline outperforms other SOTA methods, e.g., achieving 91.7% accuracy on the cross-subject benchmark of NTU 60 dataset, while being 3.15x smaller and 3.21x faster than MS-G3D, which is one of the best SOTA methods. The source code in PyTorch version and the pretrained models are available at https://github.com/yfsong0709/EfficientGCNv1.) <|cite_end|>.
Although GCN-based models enjoy good performance, they have shortcomings, \eg, convolution and/or pooling are applied over \lei{one- or few-hop neighborhoods, \eg, ST-GCN <|cite_start|> (Reference: Spatial Temporal Graph Convolutional Networks for Skeleton-Based Action Recognition: Dynamics of human body skeletons convey significant information for human action recognition. Conventional approaches for modeling skeletons usually rely on hand-crafted parts or traversal rules, thus resulting in limited expressive power and difficulties of generalization. In this work, we propose a novel model of dynamic skeletons called Spatial-Temporal Graph Convolutional Networks (ST-GCN), which moves beyond the limitations of previous methods by automatically learning both the spatial and temporal patterns from data. This formulation not only leads to greater expressive power but also stronger generalization capability. On two large datasets, Kinetics and NTU-RGBD, it achieves substantial improvements over mainstream methods.) <|cite_end|>}, according to the human skeleton graph (body joints linked up according to connectivity of human body parts). Thus, indirect links between various 3D body joints such as hands and legs are ignored.
In contrast, our model is not restricted by the structure of the typical human skeletal graph. Instead, 3D body joints are nodes which form hyper-edges of orders $1$ to $r$.
\vspace{0.1cm}
\noindent{\bf Hypergraph-based models}. Pioneering work on capturing groups of nodes across time uses tensors <|cite_start|> (Reference: Tensor Representations for Action Recognition: Human actions in video sequences are characterized by the complex interplay between spatial features and their temporal dynamics. In this paper, we propose novel tensor representations for compactly capturing such higher-order relationships between visual features for the task of action recognition. We propose two tensor-based feature representations, viz. (i) sequence compatibility kernel (SCK) and (ii) dynamics compatibility kernel (DCK). SCK builds on the spatio-temporal correlations between features, whereas DCK explicitly models the action dynamics of a sequence. We also explore generalization of SCK, coined SCK(+), that operates on subsequences to capture the local-global interplay of correlations, which can incorporate multi-modal inputs e.g., skeleton 3D body-joints and per-frame classifier scores obtained from deep learning models trained on videos. We introduce linearization of these kernels that lead to compact and fast descriptors. We provide experiments on (i) 3D skeleton action sequences, (ii) fine-grained video sequences, and (iii) standard non-fine-grained videos. As our final representations are tensors that capture higher-order relationships of features, they relate to co-occurrences for robust fine-grained recognition. We use higher-order tensors and so-called Eigenvalue Power Normalization (EPN) which have been long speculated to perform spectral detection of higher-order occurrences, thus detecting fine-grained relationships of features rather than merely count features in action sequences. We prove that a tensor of order r, built from Z* dimensional features, coupled with EPN indeed detects if at least one higher-order occurrence is `projected' into one of its binom(Z*,r) subspaces of dim. r represented by the tensor, thus forming a Tensor Power Normalization metric endowed with binom(Z*,r) such `detectors'.) <|cite_end|>to represent the 3D human body joints to exploit the kinematic relations among the adjacent and non-adjacent joints. Representing the human body as a hypergraph is adopted in <|cite_start|> (Reference: {Semi-Dynamic Hypergraph Neural Network for 3D Pose Estimation: This paper proposes a novel Semi-Dynamic Hypergraph Neural Network (SD-HNN) to estimate 3D human pose from a single image. SD-HNN adopts hypergraph to represent the human body to effectively exploit the kinematic constrains among adjacent and non-adjacent joints. Specifically, a pose hypergraph in SD-HNN has two components. One is a static hypergraph constructed according to the conventional tree body structure. The other is the semi-dynamic hypergraph representing the dynamic kinematic constrains among different joints. These two hypergraphs are combined together to be trained in an end-to-end fashion. Unlike traditional Graph Convolutional Networks (GCNs) that are based on a fixed tree structure, the SD-HNN can deal with ambiguity in human pose estimation. Experimental results demonstrate that the proposed method achieves state-of-the-art performance both on the Human3.6M and MPI-INF-3DHP datasets.) <|cite_end|>via a semi-dynamic hypergraph neural network that captures richer information than GCN. 
A hypergraph GNN <|cite_start|> (Reference: Hypergraph Neural Network for Skeleton-Based Action Recognition: Recently, skeleton-based human action recognition has attracted a lot of research attention in the field of computer vision. Graph convolutional networks (GCNs), which model the human body skeletons as spatial-temporal graphs, have shown excellent results. However, the existing methods only focus on the local physical connection between the joints, and ignore the non-physical dependencies among joints. To address this issue, we propose a hypergraph neural network (Hyper-GNN) to capture both spatial-temporal information and high-order dependencies for skeleton-based action recognition. In particular, to overcome the influence of noise caused by unrelated joints, we design the Hyper-GNN to extract the local and global structure information via the hyperedge (i.e., non-physical connection) constructions. In addition, the hypergraph attention mechanism and improved residual module are induced to further obtain the discriminative feature representations. Finally, a three-stream Hyper-GNN fusion architecture is adopted in the whole framework for action recognition. The experimental results performed on two benchmark datasets demonstrate that our proposed method can achieve the best performance when compared with the state-of-the-art skeleton-based methods.) <|cite_end|> captures spatio-temporal information together with higher-order dependencies among joints via hyper-edge (non-physical) connections for skeleton-based action recognition.
"<|reference_start|> Uncertainty-DTW for Time Series and Sequences: Dynamic Time Warping (DTW) is used for matching pairs of sequences and celebrated in applications such as forecasting the evolution of time series, clustering time series or even matching sequence pairs in few-shot action recognition. The transportation plan of DTW contains a set of paths; each path matches frames between two sequences under a varying degree of time warping, to account for varying temporal intra-class dynamics of actions. However, as DTW is the smallest distance among all paths, it may be affected by the feature uncertainty which varies across time steps/frames. Thus, in this paper, we propose to model the so-called aleatoric uncertainty of a differentiable (soft) version of DTW. To this end, we model the heteroscedastic aleatoric uncertainty of each path by the product of likelihoods from Normal distributions, each capturing variance of pair of frames. (The path distance is the sum of base distances between features of pairs of frames of the path.) The Maximum Likelihood Estimation (MLE) applied to a path yields two terms: (i) a sum of Euclidean distances weighted by the variance inverse, and (ii) a sum of log-variance regularization terms. Thus, our uncertainty-DTW is the smallest weighted path distance among all paths, and the regularization term (penalty for the high uncertainty) is the aggregate of log-variances along the path. The distance and the regularization term can be used in various objectives. We showcase forecasting the evolution of time series, estimating the Fr\\'echet mean of time series, and supervised/unsupervised few-shot action recognition of the articulated human 3D body joints. <|reference_end|>",
"<|reference_start|> Few-Shot Object Detection by Second-Order Pooling: <|reference_end|>",
"<|reference_start|> Decoupling GCN with DropGraph Module for Skeleton-Based Action Recognition: <|reference_end|>",
"<|reference_start|> Hypergraph Neural Network for Skeleton-Based Action Recognition: Recently, skeleton-based human action recognition has attracted a lot of research attention in the field of computer vision. Graph convolutional networks (GCNs), which model the human body skeletons as spatial-temporal graphs, have shown excellent results. However, the existing methods only focus on the local physical connection between the joints, and ignore the non-physical dependencies among joints. To address this issue, we propose a hypergraph neural network (Hyper-GNN) to capture both spatial-temporal information and high-order dependencies for skeleton-based action recognition. In particular, to overcome the influence of noise caused by unrelated joints, we design the Hyper-GNN to extract the local and global structure information via the hyperedge (i.e., non-physical connection) constructions. In addition, the hypergraph attention mechanism and improved residual module are induced to further obtain the discriminative feature representations. Finally, a three-stream Hyper-GNN fusion architecture is adopted in the whole framework for action recognition. The experimental results performed on two benchmark datasets demonstrate that our proposed method can achieve the best performance when compared with the state-of-the-art skeleton-based methods. <|reference_end|>"
] | [
9,
38,
49,
59
] | {"<|multi_cite_1_1|>": "ss-1244037", "<|multi_cite_1_2|>": "arxiv-211746", "<|multi_cite_1_3|>": "arxiv-211214", "<|multi_cite_1_4|>": "ss-1280936", "<|multi_cite_1_5|>": "arxiv-312452", "<|multi_cite_1_6|>": "arxiv-373137", "<|multi_cite_1_7|>": "arxiv-243223", "<|multi_cite_1_8|>": "ss-915049", "<|multi_cite_1_9|>": "arxiv-338747", "<|multi_cite_1_10|>": "arxiv-458682", "<|multi_cite_1_11|>": "arxiv-458353", "<|multi_cite_2_1|>": "ss-1244037", "<|multi_cite_2_2|>": "arxiv-211746", "<|multi_cite_2_3|>": "arxiv-211214", "<|multi_cite_2_4|>": "ss-1280936", "<|multi_cite_2_5|>": "arxiv-243223", "<|multi_cite_2_6|>": "arxiv-373137", "<|multi_cite_3_1|>": "ss-1244037", "<|multi_cite_3_2|>": "arxiv-211214", "<|multi_cite_3_3|>": "arxiv-312452", "<|multi_cite_3_4|>": "ss-915049", "<|multi_cite_3_5|>": "arxiv-338747", "<|multi_cite_3_6|>": "arxiv-458682", "<|multi_cite_3_7|>": "arxiv-458353", "<|cite_4|>": "arxiv-110915", "<|multi_cite_5_1|>": "arxiv-192725", "<|multi_cite_5_2|>": "ss-1256262", "<|cite_6|>": "arxiv-146134", "<|cite_7|>": "arxiv-146134", "<|multi_cite_8_1|>": "ss-2475384", "<|multi_cite_8_2|>": "arxiv-392642", "<|multi_cite_8_3|>": "ss-2126473", "<|cite_9|>": "ss-1358775", "<|cite_10|>": "arxiv-88592", "<|multi_cite_11_1|>": "arxiv-169962", "<|multi_cite_11_2|>": "arxiv-182371", "<|multi_cite_11_3|>": "ss-832115", "<|multi_cite_11_4|>": "arxiv-139155", "<|multi_cite_11_5|>": "ss-1162517", "<|multi_cite_11_6|>": "arxiv-312290", "<|multi_cite_11_7|>": "arxiv-499183", "<|multi_cite_12_1|>": "ss-818703", "<|multi_cite_12_2|>": "arxiv-373137", "<|multi_cite_12_3|>": "arxiv-312452", "<|multi_cite_12_4|>": "ss-2341334", "<|multi_cite_12_5|>": "arxiv-458380", "<|cite_13|>": "arxiv-192725", "<|cite_14|>": "arxiv-201946", "<|cite_15|>": "ss-1198360", "<|cite_16|>": "ss-743677", "<|cite_17|>": "ss-743676", "<|cite_18|>": "arxiv-197786", "<|cite_19|>": "arxiv-328963", "<|cite_20|>": "ss-743678", "<|cite_21|>": "arxiv-357201", "<|cite_22|>": "arxiv-351644", "<|cite_23|>": "arxiv-146134", "<|cite_24|>": "arxiv-312452", "<|cite_25|>": "ss-743681", "<|cite_26|>": "ss-1551607", "<|cite_27|>": "arxiv-384924", "<|multi_cite_28_1|>": "ss-1849690", "<|multi_cite_28_2|>": "arxiv-466860", "<|cite_30|>": "arxiv-387409", "<|cite_31|>": "ss-782680", "<|cite_32|>": "arxiv-388134", "<|cite_33|>": "arxiv-407872", "<|cite_34|>": "ss-1358775", "<|multi_cite_35_1|>": "ss-832115", "<|multi_cite_35_2|>": "arxiv-298443", "<|cite_36|>": "ss-916580", "<|cite_37|>": "arxiv-410252", "<|cite_38|>": "arxiv-330340", "<|cite_39|>": "ss-1362330", "<|multi_cite_40_1|>": "arxiv-138577", "<|multi_cite_40_2|>": "ss-1540754", "<|cite_41|>": "ss-1256387"} |
2207.01769 | <|paper_start|> Title: SESS: Saliency Enhancing with Scaling and Sliding
Abstract: SESS: Saliency Enhancing with Scaling and Sliding: High-quality saliency maps are essential in several machine learning application areas including explainable AI and weakly supervised object detection and segmentation. Many techniques have been developed to generate better saliency using neural networks. However, they are often limited to specific saliency visualisation methods or saliency issues. We propose a novel saliency enhancing approach called SESS (Saliency Enhancing with Scaling and Sliding). It is a method- and model-agnostic extension to existing saliency map generation methods. With SESS, existing saliency approaches become robust to scale variance, multiple occurrences of target objects, and the presence of distractors, and generate less noisy and more discriminative saliency maps. SESS improves saliency by fusing saliency maps extracted from multiple patches at different scales from different areas, and combines these individual maps using a novel fusion scheme that incorporates channel-wise weights and spatial weighted average. To improve efficiency, we introduce a pre-filtering step that can exclude uninformative saliency maps while still enhancing overall results. We evaluate SESS on object recognition and detection benchmarks where it achieves significant improvement. The code is released publicly to enable researchers to verify performance and further development. Code is available at: https://github.com/neouyghur/SESS
Introduction
\label{sec:intro}
Approaches that generate saliency or importance maps based on the decision of deep neural networks (DNNs) are critical in several machine learning application areas including explainable AI and weakly supervised object detection and semantic segmentation. High-quality saliency maps increase the understanding and interpretability of a DNN's decision-making process, and can increase the accuracy of segmentation and detection results.
Since the development of DNNs, numerous approaches have been proposed to efficiently produce high-quality saliency maps. However, most methods have limited transferability and versatility. Existing methods are designed for DNN models with specific structures (i.e. a global average pooling layer), for certain types of visualisation (for details refer to Sec. \ref{sec:lit}), or to address a specific limitation. For instance, CAM <|cite_start|> (Reference: Learning Deep Features for Discriminative Localization: In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them) <|cite_end|> requires a network with global average pooling. Guided backpropagation (Guided-BP) <|cite_start|> (Reference: Striving for Simplicity: The All Convolutional Net: Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.) <|cite_end|> is restricted to gradient-based approaches. Score-CAM <|cite_start|> (Reference: Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks: Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks, and the reason why the network makes specific decisions. In this paper, we develop a novel post-hoc visual explanation method called Score-CAM based on class activation mapping. Unlike previous class activation mapping based approaches, Score-CAM gets rid of the dependence on gradients by obtaining the weight of each activation map through its forward passing score on target class, the final result is obtained by a linear combination of weights and activation maps. We demonstrate that Score-CAM achieves better visual performance and fairness for interpreting the decision making process. 
Our approach outperforms previous methods on both recognition and localization tasks, it also passes the sanity check. We also indicate its application as debugging tools. Official code has been released.) <|cite_end|> seeks to reduce the method's running-time, while SmoothGrad <|cite_start|> (Reference: SmoothGrad: removing noise by adding noise: Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.) <|cite_end|> aims to generate saliency maps with lower noise.
In this work, we propose Saliency Enhancing with Scaling and Sliding (SESS), a model- and method-agnostic black-box extension to existing saliency visualisation approaches. SESS is only applied to the input and output spaces, and thus does not need to access the internal structure and features of DNNs, and is not sensitive to the design of the base saliency method. It also addresses multiple limitations that plague existing saliency methods. For example, in Fig. \ref{fig:mov}, SESS shows improvements when applied to three different saliency methods. The saliency map extracted with the gradient-based approach (Guided-BP) is discriminative but noisy. The saliency maps generated by the activation-based method Grad-CAM <|cite_start|> (Reference: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization: ) <|cite_end|> and the perturbation-based method RISE <|cite_start|> (Reference: RISE: Randomized Input Sampling for Explanation of Black-box Models: Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/) <|cite_end|> are smooth, but lack detail around the target object and fail to precisely separate it from the scene.
\begin{figure*}[!th]
\centering
\includegraphics[width=\linewidth]{images/fig1/movtivation}
\caption{Example results of three well-known deep neural network visualisation methods with and without SESS. Each of these methods represents one type of saliency map extraction technique. With SESS, all methods generate less noisy and more discriminative saliency maps. The results are extracted with ResNet50, and layer4 is used for Grad-CAM. Target ImageNet class ID is 444 (bicycle-built-for-two).}
\label{fig:mov}
\end{figure*}
SESS addresses the following limitations of existing approaches:
\begin{itemize}
\item {\bf{Weak scale invariance:}} Several studies claim that generated saliency maps are inconsistent when there are scale differences <|cite_start|> (Reference: Puzzle-CAM: Improved localization via matching partial and full features: Weakly-supervised semantic segmentation (WSSS) is introduced to narrow the gap for semantic segmentation performance from pixel-level supervision to image-level supervision. Most advanced approaches are based on class activation maps (CAMs) to generate pseudo-labels to train the segmentation network. The main limitation of WSSS is that the process of generating pseudo-labels from CAMs that use an image classifier is mainly focused on the most discriminative parts of the objects. To address this issue, we propose Puzzle-CAM, a process that minimizes differences between the features from separate patches and the whole image. Our method consists of a puzzle module and two regularization terms to discover the most integrated region in an object. Puzzle-CAM can activate the overall region of an object using image-level supervision without requiring extra parameters. % In experiments, Puzzle-CAM outperformed previous state-of-the-art methods using the same labels for supervision on the PASCAL VOC 2012 test dataset. In experiments, Puzzle-CAM outperformed previous state-of-the-art methods using the same labels for supervision on the PASCAL VOC 2012 dataset. Code associated with our experiments is available at https://github.com/OFRIN/PuzzleCAM.) <|cite_end|> <|cite_start|> (Reference: Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation: Image-level weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years. Most of advanced solutions exploit class activation map (CAM). However, CAMs can hardly serve as the object mask due to the gap between full and weak supervisions. In this paper, we propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap. Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation, whose pixel-level labels take the same spatial transformation as the input images during data augmentation. However, this constraint is lost on the CAMs trained by image-level supervision. Therefore, we propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning. Moreover, we propose a pixel correlation module (PCM), which exploits context appearance information and refines the prediction of current pixel by its similar neighbors, leading to further improvement on CAMs consistency. Extensive experiments on PASCAL VOC 2012 dataset demonstrate our method outperforms state-of-the-art methods using the same level of supervision. The code is released online.) <|cite_end|>, and we also observe that generated saliency maps are less discriminative when the target objects are comparatively small (See Fig. \ref{fig:mov} and Fig. \ref{fig:qual}).
\item {\bf{Inability to detect multiple occurrences:}} Some deep visualisation methods (i.e., Grad-CAM) fail to capture multiple occurrences of the same object in a scene <|cite_start|> (Reference: {Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks: Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM.) <|cite_end|> <|cite_start|> (Reference: Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models: Gaining insight into how deep convolutional neural network models perform image classification and how to explain their outputs have been a concern to computer vision researchers and decision makers. These deep models are often referred to as black box due to low comprehension of their internal workings. As an effort to developing explainable deep learning models, several methods have been proposed such as finding gradients of class output with respect to input image (sensitivity maps), class activation map (CAM), and Gradient based Class Activation Maps (Grad-CAM). These methods under perform when localizing multiple occurrences of the same class and do not work for all CNNs. In addition, Grad-CAM does not capture the entire object in completeness when used on single object images, this affect performance on recognition tasks. With the intention to create an enhanced visual explanation in terms of visual sharpness, object localization and explaining multiple occurrences of objects in a single image, we present Smooth Grad-CAM++ \footnote{Simple demo: http://35.238.22.135:5000/}, a technique that combines methods from two other recent techniques---SMOOTHGRAD and Grad-CAM++. Our Smooth Grad-CAM++ technique provides the capability of either visualizing a layer, subset of feature maps, or subset of neurons within a feature map at each instance at the inference level (model prediction process). After experimenting with few images, Smooth Grad-CAM++ produced more visually sharp maps with better localization of objects in the given input images when compared with other methods.) <|cite_end|>. (See Fig. \ref{fig:qual}).
\item {\bf{Impacted by distractors:}} Extracted saliency maps frequently highlight incorrect regions when distractors are present. This is especially true when the distractor class returns a high confidence score, or is correlated with the target class.
\item {\bf{Noisy results:}} Saliency maps extracted with gradient based visualisation approaches <|cite_start|> (Reference: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps: This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].) <|cite_end|> <|cite_start|> (Reference: Axiomatic Attribution for Deep Networks: We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.) <|cite_end|> appear visually noisy as shown in Fig \ref{fig:mov}.
\item {\bf{Less discriminative results:}} Activation-based approaches (e.g., Grad-CAM) tend to be less discriminative, often highlighting large regions around the target such that background regions are incorrectly captured as salient.
\item {\bf{Fixed input size requirements:}} Neural networks with fully-connected layers like VGG-16 <|cite_start|> (Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.) <|cite_end|> require a fixed input size. Moreover, models perform better when the input size at inference is the same as the input size during training. As such, most visualisation methods resize the input to a fixed size. This impacts the resolution and aspect ratio, and may cause poor visualisation results <|cite_start|> (Reference: Self-supervised Equivariant Attention Mechanism for Weakly Supervised Semantic Segmentation: Image-level weakly supervised semantic segmentation is a challenging problem that has been deeply studied in recent years. Most of advanced solutions exploit class activation map (CAM). However, CAMs can hardly serve as the object mask due to the gap between full and weak supervisions. In this paper, we propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap. Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation, whose pixel-level labels take the same spatial transformation as the input images during data augmentation. However, this constraint is lost on the CAMs trained by image-level supervision. Therefore, we propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning. Moreover, we propose a pixel correlation module (PCM), which exploits context appearance information and refines the prediction of current pixel by its similar neighbors, leading to further improvement on CAMs consistency. Extensive experiments on PASCAL VOC 2012 dataset demonstrate our method outperforms state-of-the-art methods using the same level of supervision. The code is released online.) <|cite_end|>.
\end{itemize}
SESS is a remedy for all of the limitations mentioned above. SESS extracts multiple equally sized (i.e., $224 \times 224$) patches from different regions of multiple scaled versions of an input image through resizing and sliding window operations. This step ensures that it is robust to scale variance and multiple occurrences.
Moreover, since each extracted patch is equal in size to the default input size of the model, SESS takes advantage of high-resolution inputs and respects the aspect ratio of the input image. Each extracted patch will contribute to the final saliency map, and the final saliency map is the fusion of the saliency maps extracted from patches. In the fusion step, SESS considers the confidence score of each patch, which serves to reduce noise and the impact of distractors while increasing SESS's discriminative power.
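To make the scaling and sliding step and the patch-level fusion concrete, the following simplified sketch (assuming PyTorch; the function names, the default scales and stride, and the plain confidence-weighted averaging are illustrative assumptions and do not reproduce the full fusion scheme with channel-wise weights described above) enumerates equally sized patches and pastes their saliency maps back into the full frame:

\begin{verbatim}
import torch
import torch.nn.functional as F

def extract_patches(img, scales=(1.0, 1.5, 2.0), patch=224, stride=112):
    # img: (1, 3, H, W); assumes each scaled image is at least patch x patch
    patches, boxes = [], []
    for s in scales:
        scaled = F.interpolate(img, scale_factor=s, mode='bilinear',
                               align_corners=False)
        _, _, h, w = scaled.shape
        for top in range(0, h - patch + 1, stride):
            for left in range(0, w - patch + 1, stride):
                patches.append(scaled[:, :, top:top + patch, left:left + patch])
                boxes.append((top / s, left / s, patch / s))  # original coordinates
    return patches, boxes

def fuse(patch_maps, scores, boxes, out_hw):
    # patch_maps: per-patch (patch, patch) saliency maps; scores: class confidences
    acc, norm = torch.zeros(out_hw), torch.zeros(out_hw)
    for m, c, (top, left, size) in zip(patch_maps, scores, boxes):
        t, l, sz = int(round(top)), int(round(left)), int(round(size))
        m = F.interpolate(m[None, None], size=(sz, sz), mode='bilinear',
                          align_corners=False)[0, 0]
        b, r = min(t + sz, out_hw[0]), min(l + sz, out_hw[1])
        acc[t:b, l:r] += c * m[:b - t, :r - l]   # confidence-weighted accumulation
        norm[t:b, l:r] += c
    return acc / norm.clamp(min=1e-6)            # spatial weighted average
\end{verbatim}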
The increased performance of SESS comes at the cost of reduced efficiency due to the use of multiple patches. Quantitative ablation studies show that using more scales and denser sliding windows is beneficial, but increases computational cost. To reduce this cost, SESS uses a pre-filtering step that filters out background regions with low target class activation scores. Compared to saliency extraction, the inference step is efficient as it only requires a single forward pass and can exploit parallel computation and batch processing. As such, SESS obtains improved saliency masks with a small increase in run-time requirements. Ablation studies show that the proposed method outperforms its base saliency methods even when using pre-filtering with a high pre-filter ratio. In a Pointing Game experiment <|cite_start|> (Reference: Top-down Neural Attention by Excitation Backprop: We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images.) <|cite_end|>, all methods with SESS achieved significant improvements, despite a pre-filter ratio of $99\%$ that excludes the majority of extracted patches from saliency generation.
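The pre-filtering step can likewise be sketched as follows (an illustrative simplification assuming PyTorch; the batching scheme and the use of softmax scores as the filtering criterion are assumptions rather than an exact description of the released implementation). Only the highest-scoring fraction of patches is passed on to the saliency method:

\begin{verbatim}
import torch

@torch.no_grad()
def prefilter(model, patches, class_idx, filter_ratio=0.99, batch_size=64):
    # Score every patch with batched forward passes, then keep the top
    # (1 - filter_ratio) fraction by target-class confidence.
    scores = []
    for i in range(0, len(patches), batch_size):
        batch = torch.cat(patches[i:i + batch_size], dim=0)
        probs = torch.softmax(model(batch), dim=1)[:, class_idx]
        scores.extend(probs.tolist())
    keep = max(1, int(round(len(patches) * (1.0 - filter_ratio))))
    order = sorted(range(len(patches)), key=lambda j: scores[j], reverse=True)
    return order[:keep], scores
\end{verbatim}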
We quantitatively and qualitatively evaluate SESS and conduct ablation studies regarding multiple scales, pre-filtering and fusion. All experimental results show that SESS is a useful and versatile extension to existing saliency methods.
To summarize, the main contributions of this work are as follows:
\begin{itemize}
\item We propose SESS, a simple and efficient model- and method-agnostic black-box extension to existing saliency methods.
\item We demonstrate that SESS increases the visual quality of saliency maps, and improves their performance on object recognition and localisation tasks.
\end{itemize}
Related Work
\label{sec:lit}
{\noindent \bfseries{Deep Saliency Methods:}} Numerous deep neural network-based visualisation methods have been developed in recent years. Based on how the saliency map is extracted, they can be broadly categorised into three groups: gradient-based <|cite_start|> (Reference: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps: This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].) <|cite_end|> <|cite_start|> (Reference: SmoothGrad: removing noise by adding noise: Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.) <|cite_end|> <|cite_start|> (Reference: Axiomatic Attribution for Deep Networks: We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.) <|cite_end|>, class activation-based <|cite_start|> (Reference: Learning Deep Features for Discriminative Localization: In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. 
Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them) <|cite_end|> <|cite_start|> (Reference: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization: ) <|cite_end|> <|cite_start|> (Reference: Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks: In this paper, we propose an efficient saliency map generation method, called Group score-weighted Class Activation Mapping (Group-CAM), which adopts the "split-transform-merge" strategy to generate saliency maps. Specifically, for an input image, the class activations are firstly split into groups. In each group, the sub-activations are summed and de-noised as an initial mask. After that, the initial masks are transformed with meaningful perturbations and then applied to preserve sub-pixels of the input (i.e., masked inputs), which are then fed into the network to calculate the confidence scores. Finally, the initial masks are weighted summed to form the final saliency map, where the weights are confidence scores produced by the masked inputs. Group-CAM is efficient yet effective, which only requires dozens of queries to the network while producing target-related saliency maps. As a result, Group-CAM can be served as an effective data augment trick for fine-tuning the networks. We comprehensively evaluate the performance of Group-CAM on common-used benchmarks, including deletion and insertion tests on ImageNet-1k, and pointing game tests on COCO2017. Extensive experimental results demonstrate that Group-CAM achieves better visual performance than the current state-of-the-art explanation approaches. The code is available at https://github.com/wofmanaf/Group-CAM.) <|cite_end|> <|cite_start|> (Reference: Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks: Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks, and the reason why the network makes specific decisions. In this paper, we develop a novel post-hoc visual explanation method called Score-CAM based on class activation mapping. Unlike previous class activation mapping based approaches, Score-CAM gets rid of the dependence on gradients by obtaining the weight of each activation map through its forward passing score on target class, the final result is obtained by a linear combination of weights and activation maps. We demonstrate that Score-CAM achieves better visual performance and fairness for interpreting the decision making process. Our approach outperforms previous methods on both recognition and localization tasks, it also passes the sanity check. We also indicate its application as debugging tools. Official code has been released.) <|cite_end|>, and perturbation-based <|cite_start|> (Reference: Interpretable Explanations of Black Boxes by Meaningful Perturbation: As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. 
In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks "look" in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.) <|cite_end|> <|cite_start|> (Reference: RISE: Randomized Input Sampling for Explanation of Black-box Models: Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/) <|cite_end|> <|cite_start|> (Reference: Real time image saliency for black box classifiers: In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, therefore suitable for use in real-time systems. We test our approach on CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task. We achieve results outperforming other weakly supervised methods.) <|cite_end|> methods.
Gradient-based methods interpret the gradient with respect to the input image as a saliency map. They are efficient as they only require a single forward and backward propagation operation. However, saliency maps generated from raw gradients are visually noisy. Activation-based methods aggregate target class activations of a selected network layer to generate saliency maps. Compared with gradient-based methods, activation-based methods are less noisy, but are also less discriminative and will often incorrectly show strong activations in nearby background regions. Perturbation-based methods generate saliency maps by measuring the changes in the output when the input is perturbed. Perturbation-based methods are slow when compared to most gradient- and activation-based approaches, as they require multiple queries.
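For illustration, a minimal gradient-based saliency map can be computed with a single forward and backward pass as follows (a generic PyTorch sketch, not the implementation of any particular cited method):

\begin{verbatim}
import torch

def vanilla_gradient_saliency(model, img, class_idx):
    # Backpropagate the class score to the input; the per-pixel gradient
    # magnitude is read off as the saliency map.
    model.eval()
    img = img.detach().clone().requires_grad_(True)   # (1, 3, H, W)
    model(img)[0, class_idx].backward()
    return img.grad.abs().max(dim=1)[0]               # (1, H, W)
\end{verbatim}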
Methods can also be split into black-box and white-box according to whether they access the model architecture and parameters. Except for some perturbation-based methods <|cite_start|> (Reference: RISE: Randomized Input Sampling for Explanation of Black-box Models: Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/) <|cite_end|> <|cite_start|> (Reference: Interpretable Explanations of Black Boxes by Meaningful Perturbation: As machine learning algorithms are increasingly applied to high impact yet high risk tasks, such as medical diagnosis or autonomous driving, it is critical that researchers can explain how such algorithms arrived at their predictions. In recent years, a number of image saliency methods have been developed to summarize where highly complex neural networks "look" in an image for evidence for their predictions. However, these techniques are limited by their heuristic nature and architectural constraints. In this paper, we make two main contributions: First, we propose a general framework for learning different kinds of explanations for any black box algorithm. Second, we specialise the framework to find the part of an image most responsible for a classifier decision. Unlike previous works, our method is model-agnostic and testable because it is grounded in explicit and interpretable image perturbations.) <|cite_end|>, saliency methods are all white-box in nature <|cite_start|> (Reference: Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps: This paper addresses the visualisation of image classification models, learnt using deep Convolutional Networks (ConvNets). We consider two visualisation techniques, based on computing the gradient of the class score with respect to the input image. The first one generates an image, which maximises the class score [Erhan et al., 2009], thus visualising the notion of the class, captured by a ConvNet. The second technique computes a class saliency map, specific to a given image and class. We show that such maps can be employed for weakly supervised object segmentation using classification ConvNets. Finally, we establish the connection between the gradient-based ConvNet visualisation methods and deconvolutional networks [Zeiler et al., 2013].) <|cite_end|> <|cite_start|> (Reference: SmoothGrad: removing noise by adding noise: Explaining the output of a deep network remains a challenge. 
In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.) <|cite_end|> <|cite_start|> (Reference: Axiomatic Attribution for Deep Networks: We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.) <|cite_end|> <|cite_start|> (Reference: Learning Deep Features for Discriminative Localization: In this work, we revisit the global average pooling layer proposed in [13], and shed light on how it explicitly enables the convolutional neural network to have remarkable localization ability despite being trained on image-level labels. While this technique was previously proposed as a means for regularizing training, we find that it actually builds a generic localizable deep representation that can be applied to a variety of tasks. Despite the apparent simplicity of global average pooling, we are able to achieve 37.1% top-5 error for object localization on ILSVRC 2014, which is remarkably close to the 34.2% top-5 error achieved by a fully supervised CNN approach. We demonstrate that our network is able to localize the discriminative image regions on a variety of tasks despite not being trained for them) <|cite_end|> <|cite_start|> (Reference: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization: ) <|cite_end|>. White-box methods are usually more computationally efficient than black-box methods, and require a single forward and backward pass through the network. However, black-box methods are model agnostic, while white-box methods may only work on models with specific architectural features.
Approaches can also be one-shot or multi-shot in nature. One-shot approaches require a single forward and backward pass. Most gradient- and activation-based methods are single-shot. However, multi-shot variants have been developed to obtain further improvements. For example, SmoothGrad <|cite_start|> (Reference: SmoothGrad: removing noise by adding noise: Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.) <|cite_end|> generates sharper visualisations through multiple passes over noisy samples of the input image. Integrated Gradients (IG) <|cite_start|> (Reference: Axiomatic Attribution for Deep Networks: We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.) <|cite_end|> addresses the ``gradient saturation" problem by taking the average of the gradients across multiple interpolated images. Augmented Grad-CAM <|cite_start|> (Reference: Augmented grad-cam: Heat-maps super resolution through augmentation: We present Augmented Grad-CAM, a general framework to provide a high-resolution visual explanation of CNN outputs. Our idea is to take advantage of image augmentation to aggregate multiple low-resolution heat-maps – in our experiments Grad-CAMs – computed from augmented copies of the same input image. We generate the high-resolution heat-map through super-resolution, and we formulate a general optimization problem based on Total Variation regularization. This problem is entirely solved on the GPU at inference time, together with image augmentation. Augmented Grad-CAM outperforms Grad-CAM in weakly supervised localization on Imagenet dataset, and provides more detailed heat-maps. Moreover, Augmented Grad-CAM turns to be particularly useful in monitoring the production of silicon wafers, where CNNs are employed to classify defective patterns on the wafer surface to detect harmful faults in the production line.) <|cite_end|> generates high-resolution saliency maps by aggregating multiple low-resolution saliency maps extracted from augmented variants of the input.
Smooth Grad-CAM++ <|cite_start|> (Reference: Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models: Gaining insight into how deep convolutional neural network models perform image classification and how to explain their outputs have been a concern to computer vision researchers and decision makers. These deep models are often referred to as black box due to low comprehension of their internal workings. As an effort to developing explainable deep learning models, several methods have been proposed such as finding gradients of class output with respect to input image (sensitivity maps), class activation map (CAM), and Gradient based Class Activation Maps (Grad-CAM). These methods under perform when localizing multiple occurrences of the same class and do not work for all CNNs. In addition, Grad-CAM does not capture the entire object in completeness when used on single object images, this affect performance on recognition tasks. With the intention to create an enhanced visual explanation in terms of visual sharpness, object localization and explaining multiple occurrences of objects in a single image, we present Smooth Grad-CAM++ \footnote{Simple demo: http://35.238.22.135:5000/}, a technique that combines methods from two other recent techniques---SMOOTHGRAD and Grad-CAM++. Our Smooth Grad-CAM++ technique provides the capability of either visualizing a layer, subset of feature maps, or subset of neurons within a feature map at each instance at the inference level (model prediction process). After experimenting with few images, Smooth Grad-CAM++ produced more visually sharp maps with better localization of objects in the given input images when compared with other methods.) <|cite_end|> utilises the same idea proposed in SmoothGrad to generate sharper saliency maps. To the best of our knowledge, all perturbation methods are multi-shot in nature, as they require multiple queries to the model, each of which has a different perturbation.
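The multi-shot idea behind SmoothGrad can be sketched as follows (illustrative PyTorch code; the noise level and number of samples are arbitrary choices, not the settings of the original paper). Each query repeats the single-shot gradient computation on a noisy copy of the input and the resulting maps are averaged:

\begin{verbatim}
import torch

def smoothgrad(model, img, class_idx, n_samples=25, sigma=0.15):
    model.eval()
    acc = torch.zeros_like(img[:, 0])                  # (1, H, W)
    for _ in range(n_samples):
        noisy = (img.detach() + sigma * torch.randn_like(img)).requires_grad_(True)
        model(noisy)[0, class_idx].backward()
        acc += noisy.grad.abs().max(dim=1)[0]          # gradient magnitude per sample
    return acc / n_samples
\end{verbatim}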
Attempts have been made to make multi-shot approaches more efficient. Most such approaches seek to create perturbation masks in an efficient way. Dabkowski \etal <|cite_start|> (Reference: Real time image saliency for black box classifiers: In this work we develop a fast saliency detection method that can be applied to any differentiable image classifier. We train a masking model to manipulate the scores of the classifier by masking salient parts of the input image. Our model generalises well to unseen images and requires a single forward pass to perform saliency detection, therefore suitable for use in real-time systems. We test our approach on CIFAR-10 and ImageNet datasets and show that the produced saliency maps are easily interpretable, sharp, and free of artifacts. We suggest a new metric for saliency and test our method on the ImageNet object localisation task. We achieve results outperforming other weakly supervised methods.) <|cite_end|> generates a perturbation mask with a second neural network. Score-CAM <|cite_start|> (Reference: Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks: Recently, increasing attention has been drawn to the internal mechanisms of convolutional neural networks, and the reason why the network makes specific decisions. In this paper, we develop a novel post-hoc visual explanation method called Score-CAM based on class activation mapping. Unlike previous class activation mapping based approaches, Score-CAM gets rid of the dependence on gradients by obtaining the weight of each activation map through its forward passing score on target class, the final result is obtained by a linear combination of weights and activation maps. We demonstrate that Score-CAM achieves better visual performance and fairness for interpreting the decision making process. Our approach outperforms previous methods on both recognition and localization tasks, it also passes the sanity check. We also indicate its application as debugging tools. Official code has been released.) <|cite_end|> uses class activation maps (CAM) as masks; and Group-CAM <|cite_start|> (Reference: Group-CAM: Group Score-Weighted Visual Explanations for Deep Convolutional Networks: In this paper, we propose an efficient saliency map generation method, called Group score-weighted Class Activation Mapping (Group-CAM), which adopts the "split-transform-merge" strategy to generate saliency maps. Specifically, for an input image, the class activations are firstly split into groups. In each group, the sub-activations are summed and de-noised as an initial mask. After that, the initial masks are transformed with meaningful perturbations and then applied to preserve sub-pixels of the input (i.e., masked inputs), which are then fed into the network to calculate the confidence scores. Finally, the initial masks are weighted summed to form the final saliency map, where the weights are confidence scores produced by the masked inputs. Group-CAM is efficient yet effective, which only requires dozens of queries to the network while producing target-related saliency maps. As a result, Group-CAM can be served as an effective data augment trick for fine-tuning the networks. We comprehensively evaluate the performance of Group-CAM on common-used benchmarks, including deletion and insertion tests on ImageNet-1k, and pointing game tests on COCO2017. Extensive experimental results demonstrate that Group-CAM achieves better visual performance than the current state-of-the-art explanation approaches. 
The code is available at https://github.com/wofmanaf/Group-CAM.) <|cite_end|> follows a similar idea, but further reduces the number of masks through the merging of adjacent maps.
The proposed SESS is a method- and model-agnostic saliency extension. It can be a ``plug-and-play" extension for any saliency method. However, like perturbation methods, it requires multiple queries. As such, for the sake of efficiency, single-shot and efficient multi-shot approaches are most appropriate for use with SESS.
{\noindent \bfseries{Enhancing Deep Saliency Visualisations:}} Many attempts have been made to generate discriminative and low-noise saliency maps. Early Gradient-based methods are visually noisy, and several methods have been proposed to address this. Guided-BP <|cite_start|> (Reference: Striving for Simplicity: The All Convolutional Net: Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.) <|cite_end|> ignores zero gradients during backpropagation by using a RELU as the activation unit. SmoothGrad <|cite_start|> (Reference: Striving for Simplicity: The All Convolutional Net: Most modern convolutional neural networks (CNNs) used for object recognition are built using the same principles: Alternating convolution and max-pooling layers followed by a small number of fully connected layers. We re-evaluate the state of the art for object recognition from small images with convolutional networks, questioning the necessity of different components in the pipeline. We find that max-pooling can simply be replaced by a convolutional layer with increased stride without loss in accuracy on several image recognition benchmarks. Following this finding -- and building on other recent work for finding simple network structures -- we propose a new architecture that consists solely of convolutional layers and yields competitive or state of the art performance on several object recognition datasets (CIFAR-10, CIFAR-100, ImageNet). To analyze the network we introduce a new variant of the "deconvolution approach" for visualizing features learned by CNNs, which can be applied to a broader range of network structures than existing approaches.) <|cite_end|> takes the average gradient of noisy samples <|cite_start|> (Reference: SmoothGrad: removing noise by adding noise: Explaining the output of a deep network remains a challenge. In the case of an image classifier, one type of explanation is to identify pixels that strongly influence the final decision. A starting point for this strategy is the gradient of the class score function with respect to the input image. This gradient can be interpreted as a sensitivity map, and there are several techniques that elaborate on this basic idea. This paper makes two contributions: it introduces SmoothGrad, a simple method that can help visually sharpen gradient-based sensitivity maps, and it discusses lessons in the visualization of these maps. We publish the code for our experiments and a website with our results.) <|cite_end|> to generate cleaner results.
The first of the activation-based methods, CAM, is model sensitive. It requires the model apply a global average pooling over convolutional feature map channels immediately prior to the classification layer <|cite_start|> (Reference: RISE: Randomized Input Sampling for Explanation of Black-box Models: Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/) <|cite_end|>. Later variants such as Grad-CAM relax this restriction by using average channel gradients as weights. However, Grad-CAM <|cite_start|> (Reference: Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization: ) <|cite_end|> is also less discriminative, and is unable to locate multiple occurrences of target objects. Grad-CAM++ <|cite_start|> (Reference: {Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks: Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision based problems. However, deep models are perceived as "black box" methods considering the lack of understanding of their internal functioning. There has been a significant recent interest to develop explainable deep learning models, and this paper is an effort in this direction. Building on a recently proposed method called Grad-CAM, we propose Grad-CAM++ to provide better visual explanations of CNN model predictions (when compared to Grad-CAM), in terms of better localization of objects as well as explaining occurrences of multiple objects of a class in a single image. We provide a mathematical explanation for the proposed method, Grad-CAM++, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the class label under consideration. Our extensive experiments and evaluations, both subjective and objective, on standard datasets showed that Grad-CAM++ indeed provides better visual explanations for a given CNN architecture when compared to Grad-CAM.) <|cite_end|> uses positive partial derivatives of features maps as weights. 
Smooth Grad-CAM++ <|cite_start|> (Reference: Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models: Gaining insight into how deep convolutional neural network models perform image classification and how to explain their outputs have been a concern to computer vision researchers and decision makers. These deep models are often referred to as black box due to low comprehension of their internal workings. As an effort to developing explainable deep learning models, several methods have been proposed such as finding gradients of class output with respect to input image (sensitivity maps), class activation map (CAM), and Gradient based Class Activation Maps (Grad-CAM). These methods under perform when localizing multiple occurrences of the same class and do not work for all CNNs. In addition, Grad-CAM does not capture the entire object in completeness when used on single object images, this affect performance on recognition tasks. With the intention to create an enhanced visual explanation in terms of visual sharpness, object localization and explaining multiple occurrences of objects in a single image, we present Smooth Grad-CAM++ \footnote{Simple demo: http://35.238.22.135:5000/}, a technique that combines methods from two other recent techniques---SMOOTHGRAD and Grad-CAM++. Our Smooth Grad-CAM++ technique provides the capability of either visualizing a layer, subset of feature maps, or subset of neurons within a feature map at each instance at the inference level (model prediction process). After experimenting with few images, Smooth Grad-CAM++ produced more visually sharp maps with better localization of objects in the given input images when compared with other methods.) <|cite_end|> combines techniques from both Grad-CAM++ and SmoothGrad to generate sharper visualisations.
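As a rough illustration of the weighting scheme shared by this family of methods, the sketch below forms a basic Grad-CAM-style map from the last convolutional feature maps and their gradients. Obtaining these two tensors (e.g., with forward and backward hooks) is assumed and omitted here, and the refined weightings of Grad-CAM++ and Smooth Grad-CAM++ differ from this basic form.
\begin{verbatim}
import torch
import torch.nn.functional as F

def basic_grad_cam(feature_maps, gradients):
    # feature_maps, gradients: tensors of shape (1, K, H, W) taken from
    # the last convolutional layer. Channel weights are the spatially
    # averaged gradients of the class score.
    weights = gradients.mean(dim=(2, 3), keepdim=True)   # (1, K, 1, 1)
    cam = F.relu((weights * feature_maps).sum(dim=1))    # (1, H, W)
    return cam / (cam.max() + 1e-8)                      # normalise to [0, 1]
\end{verbatim}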
Perturbation methods are inefficient, as they send multiple queries to the model. For example, RISE <|cite_start|> (Reference: RISE: Randomized Input Sampling for Explanation of Black-box Models: Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/) <|cite_end|> sends 8000 queries to the model to evaluate the importance of regions covered by 8000 randomly selected masks. Recent works reduce the number of masks by using channels in CAMs as masks. For instance, Score-CAM uses all channels in CAMs, while Group-CAM further minimises the number of masks by grouping the channels of CAMs.
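The perturbation idea can be summarised with the simplified sketch below: the model is queried on masked copies of the input and each mask is accumulated with a weight given by the resulting class score. The random mask generation and normalisation details of RISE, and the CAM-channel masks of Score-CAM and Group-CAM, are omitted; the masks are simply assumed to be given.
\begin{verbatim}
import torch

def masked_saliency(model, x, class_idx, masks):
    # x: input of shape (1, C, H, W); masks: tensor of shape (N, 1, H, W)
    # with values in [0, 1]. Accumulate masks weighted by class scores.
    saliency = torch.zeros_like(x[:, :1])
    with torch.no_grad():
        for m in masks:                               # m: (1, H, W)
            score = torch.softmax(model(x * m), dim=1)[0, class_idx]
            saliency += score * m
    return saliency / len(masks)
\end{verbatim}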
All the aforementioned methods have successfully improved certain issues relating to saliency methods, but have limited transferability and versatility. In comparison, SESS is a model and method agnostic extension, which can be applied to any existing saliency approach (though we note that single-pass or efficient multi-pass methods are most suitable). Moreover, SESS is robust to scale-variance, noise, multiple occurrences and distractors. SESS can generate clean and focused saliency maps, and significantly improves the performance of saliency methods for image recognition and detection tasks. <|paper_end|> | [
"<|reference_start|> Top-down Neural Attention by Excitation Backprop: We aim to model the top-down attention of a Convolutional Neural Network (CNN) classifier for generating task-specific attention maps. Inspired by a top-down human visual attention model, we propose a new backpropagation scheme, called Excitation Backprop, to pass along top-down signals downwards in the network hierarchy via a probabilistic Winner-Take-All process. Furthermore, we introduce the concept of contrastive attention to make the top-down attention maps more discriminative. In experiments, we demonstrate the accuracy and generalizability of our method in weakly supervised localization tasks on the MS COCO, PASCAL VOC07 and ImageNet datasets. The usefulness of our method is further validated in the text-to-region association task. On the Flickr30k Entities dataset, we achieve promising performance in phrase localization by leveraging the top-down attention of a CNN model that has been trained on weakly labeled web images. <|reference_end|>",
"<|reference_start|> RISE: Randomized Input Sampling for Explanation of Black-box Models: Deep neural networks are being used increasingly to automate data analysis and decision making, yet their decision-making process is largely unclear and is difficult to explain to the end users. In this paper, we address the problem of Explainable AI for deep neural networks that take images as input and output a class probability. We propose an approach called RISE that generates an importance map indicating how salient each pixel is for the model's prediction. In contrast to white-box approaches that estimate pixel importance using gradients or other internal network state, RISE works on black-box models. It estimates importance empirically by probing the model with randomly masked versions of the input image and obtaining the corresponding outputs. We compare our approach to state-of-the-art importance extraction methods using both an automatic deletion/insertion metric and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets show that our approach matches or exceeds the performance of other methods, including white-box approaches. Project page: http://cs-people.bu.edu/vpetsiuk/rise/ <|reference_end|>",
"<|reference_start|> Axiomatic Attribution for Deep Networks: We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better. <|reference_end|>",
"<|reference_start|> Smooth Grad-CAM++: An Enhanced Inference Level Visualization Technique for Deep Convolutional Neural Network Models: Gaining insight into how deep convolutional neural network models perform image classification and how to explain their outputs have been a concern to computer vision researchers and decision makers. These deep models are often referred to as black box due to low comprehension of their internal workings. As an effort to developing explainable deep learning models, several methods have been proposed such as finding gradients of class output with respect to input image (sensitivity maps), class activation map (CAM), and Gradient based Class Activation Maps (Grad-CAM). These methods under perform when localizing multiple occurrences of the same class and do not work for all CNNs. In addition, Grad-CAM does not capture the entire object in completeness when used on single object images, this affect performance on recognition tasks. With the intention to create an enhanced visual explanation in terms of visual sharpness, object localization and explaining multiple occurrences of objects in a single image, we present Smooth Grad-CAM++ \\footnote{Simple demo: http://35.238.22.135:5000/}, a technique that combines methods from two other recent techniques---SMOOTHGRAD and Grad-CAM++. Our Smooth Grad-CAM++ technique provides the capability of either visualizing a layer, subset of feature maps, or subset of neurons within a feature map at each instance at the inference level (model prediction process). After experimenting with few images, Smooth Grad-CAM++ produced more visually sharp maps with better localization of objects in the given input images when compared with other methods. <|reference_end|>"
] | [
14,
25,
29,
45
] | {"<|cite_1|>": "arxiv-89003", "<|cite_2|>": "arxiv-70637", "<|cite_3|>": "arxiv-226918", "<|cite_4|>": "arxiv-126604", "<|cite_5|>": "ss-680203", "<|cite_6|>": "arxiv-163107", "<|multi_cite_7_1|>": "arxiv-317411", "<|multi_cite_7_2|>": "arxiv-258501", "<|multi_cite_8_1|>": "ss-788111", "<|multi_cite_8_2|>": "arxiv-217389", "<|multi_cite_9_1|>": "arxiv-54326", "<|multi_cite_9_2|>": "arxiv-118182", "<|cite_10|>": "arxiv-65675", "<|cite_11|>": "arxiv-258501", "<|cite_12|>": "arxiv-103169", "<|multi_cite_13_1|>": "arxiv-54326", "<|multi_cite_13_2|>": "arxiv-126604", "<|multi_cite_13_3|>": "arxiv-118182", "<|multi_cite_14_1|>": "arxiv-89003", "<|multi_cite_14_2|>": "ss-680203", "<|multi_cite_14_3|>": "arxiv-329861", "<|multi_cite_14_4|>": "arxiv-226918", "<|multi_cite_15_1|>": "arxiv-121416", "<|multi_cite_15_2|>": "arxiv-163107", "<|multi_cite_15_3|>": "ss-1257113", "<|multi_cite_16_1|>": "arxiv-163107", "<|multi_cite_16_2|>": "arxiv-121416", "<|multi_cite_17_1|>": "arxiv-54326", "<|multi_cite_17_2|>": "arxiv-126604", "<|multi_cite_17_3|>": "arxiv-118182", "<|multi_cite_17_4|>": "arxiv-89003", "<|multi_cite_17_5|>": "ss-680203", "<|cite_18|>": "arxiv-126604", "<|cite_19|>": "arxiv-118182", "<|cite_20|>": "ss-781884", "<|cite_21|>": "arxiv-217389", "<|cite_22|>": "ss-1257113", "<|cite_23|>": "arxiv-226918", "<|cite_24|>": "arxiv-329861", "<|cite_25|>": "arxiv-70637", "<|cite_26|>": "arxiv-70637", "<|cite_27|>": "arxiv-126604", "<|cite_28|>": "arxiv-163107", "<|cite_29|>": "ss-680203", "<|cite_30|>": "ss-788111", "<|cite_31|>": "arxiv-217389", "<|cite_32|>": "arxiv-163107"} |
2002.04006 | <|paper_start|> Title: New superconvergent structures developed from the finite volume element method in 1D
Abstract: New superconvergent structures developed from the finite volume element method in 1D: New superconvergent structures are introduced by the finite volume element method (FVEM), which allow us to choose the superconvergent points freely. The general orthogonal condition and the modified M-decomposition (MMD) technique are established to prove the superconvergence properties of the new structures. In addition, the relationships between the orthogonal condition and the convergence properties for the FVE schemes are summarized in Table 1. Numerical results are given to illustrate the theoretical results.
Introduction
\label{intro}
The finite volume element method (FVEM) <|cite_start|> (Reference: Some errors estimates for the box method: We define and analyze several variants of the box method for discretizing elliptic boundary value problems in the plane. Our estimates show the error to be comparable to a standard Galerkin finite element method using piecewise linear polynomials.) <|cite_end|> <|cite_start|> (Reference: Finite volume methods: Foundation and analysis: Finite volume methods are a class of discretization schemes that have proven highly successful in approximating the solution of a wide variety of conservation law systems. They are extensively used in fluid mechanics, meteorology, electromagnetics, semi-conductor device simulation, models of biological processes and many other engineering areas governed by conservative systems that can be written in integral control volume form. This article reviews elements of the foundation and analysis of modern finite volume methods. The primary advantages of these methods are numerical robustness through the obtention of discrete maximum (minimum) principles, applicability on very general unstructured meshes, and the intrinsic local conservation properties of the resulting schemes. Throughout this article, specific attention is given to scalar nonlinear hyperbolic conservation laws and the development of high order accurate schemes for discretizing them. A key tool in the design and analysis of finite volume schemes suitable for discontinuity capturing is discrete maximum principle analysis. A number of building blocks used in the development of numerical schemes possessing local discrete maximum principles are reviewed in one and several space dimensions, e.g. monotone fluxes, TVD discretization, positive coecient discretization, non-oscillatory reconstruction, slope limiters, etc. When available, theoretical results concerning a priori and a posteriori error estimates are given. Further advanced topics are then considered such as high order time integration, discretization of diusion terms and the extension to systems of nonlinear conservation laws.) <|cite_end|> <|cite_start|> (Reference: On the finite volume element method: ) <|cite_end|> <|cite_start|> (Reference: The Finite Volume Element Method for Diffusion Equations on General Triangulations: This paper develops discretization error estimates for the finite volume element method on general triangulations of a polygonal domain in $\mathcal{R}^2 $ using a special type of control volume. The theory applies to diffusion equations of the form \[ \begin{gathered} - \nabla (A\nabla u) = f\quad {\text{in }}\Omega , \hfill \\ u = 0\quad {\text{on }}\partial \Omega . \hfill \\ \end{gathered} \] Under fairly general conditions, the theory establishes $O(h)$ estimates of the error in a discrete $\mathcal{H}^1 $ seminorm. Under an additional assumption concerning local uniformity of the triangulation, the estimate is improved to $O(h^2 )$.) <|cite_end|> <|cite_start|> (Reference: A New Class of High Order Finite Volume Methods for Second Order Elliptic Equations: In the numerical simulation of many practical problems in physics and engineering, finite volume methods are an important and popular class of discretization methods due to the local conservation and the capability of discretizing domains with complex geometry. However, they are limited by low order approximation since most existing finite volume methods use piecewise constant or linear function space to approximate the solution. 
In this paper, a new class of high order finite volume methods for second order elliptic equations is developed by combining high order finite element methods and linear finite volume methods. Optimal convergence rate in $H^1$-norm for our new quadratic finite volume methods over two-dimensional triangular or rectangular grids is obtained.) <|cite_end|> <|cite_start|> (Reference: A construction of higher-order finite volume methods: . We provide a method for the construction of higher-order finite volume methods (FVMs) for solving boundary value problems of the two di- mensional elliptic equations. Specifically, when the trial space of the FVM is chosen to be a conforming triangle mesh finite element space, we describe a construction of the associated test space that guarantees the uniform local-ellipticity of the family of the resulting discrete bilinear forms. We show that the uniform local-ellipticity ensures that the resulting FVM has a unique solution which enjoys an optimal error estimate. We characterize the uniform local- ellipticity in terms of the uniform boundedness (below by a positive constant) of the smallest eigenvalues of the matrices associated with the FVMs. We then translate the characterization to equivalent requirements on the shapes of the triangle meshes for the trial spaces. Four convenient sufficient conditions for the family of the discrete bilinear forms to be uniformly local-elliptic are de- rived from the characterization. Following the general procedure, we construct four specific FVMs which satisfy the uniform local-ellipticity. Numerical re- sults are presented to verify the theoretical results on the convergence order of the FVMs.) <|cite_end|> <|cite_start|> (Reference: Superconvergence of finite volume methods for the second order elliptic problem: ) <|cite_end|> <|cite_start|> (Reference: On first and second order box schemes: ) <|cite_end|> <|cite_start|> (Reference: Generalized Difference Methods for Differential Equations: Numerical Analysis of Finite Volume Methods: Preliminaries two point boundary value problems second order elliptic equations fourth order and nonlinear elliptic equations parabolic equations hyperbolic equations convection-dominated diffusion problems applications.) <|cite_end|> <|cite_start|> (Reference: Conditioning of the finite volume element method for diffusion problems with general simplicial meshes: The conditioning of the linear finite volume element discretization for general diffusion equations is studied on arbitrary simplicial meshes. The condition number is defined as the ratio of the maximal singular value of the stiffness matrix to the minimal eigenvalue of its symmetric part. This definition is motivated by the fact that the convergence rate of the generalized minimal residual method for the corresponding linear systems is determined by the ratio. An upper bound for the ratio is established by developing an upper bound for the maximal singular value and a lower bound for the minimal eigenvalue of the symmetric part. It is shown that the bound depends on three factors, the number of the elements in the mesh, the mesh nonuniformity measured in the Euclidean metric, and the mesh nonuniformity measured in the metric specified by the inverse diffusion matrix. It is also shown that the diagonal scaling can effectively eliminates the effects from the mesh nonuniformity measured in the Euclidean metric. Numerical results for a selection of examples in one, two, and three dimensions are presented.) 
<|cite_end|>, which is famous for the local conservation property,
has been studied widely for the stability and $H^1$ estimate <|cite_start|> (Reference: Higher-order finite volume methods for elliptic boundary value problems: ) <|cite_end|> <|cite_start|> (Reference: A construction of higher-order finite volume methods: . We provide a method for the construction of higher-order finite volume methods (FVMs) for solving boundary value problems of the two di- mensional elliptic equations. Specifically, when the trial space of the FVM is chosen to be a conforming triangle mesh finite element space, we describe a construction of the associated test space that guarantees the uniform local-ellipticity of the family of the resulting discrete bilinear forms. We show that the uniform local-ellipticity ensures that the resulting FVM has a unique solution which enjoys an optimal error estimate. We characterize the uniform local- ellipticity in terms of the uniform boundedness (below by a positive constant) of the smallest eigenvalues of the matrices associated with the FVMs. We then translate the characterization to equivalent requirements on the shapes of the triangle meshes for the trial spaces. Four convenient sufficient conditions for the family of the discrete bilinear forms to be uniformly local-elliptic are de- rived from the characterization. Following the general procedure, we construct four specific FVMs which satisfy the uniform local-ellipticity. Numerical re- sults are presented to verify the theoretical results on the convergence order of the FVMs.) <|cite_end|> <|cite_start|> (Reference: GENERALIZED DIFFERENCE METHODS ON ARBITRARY QUADRILATERAL NETWORKS: ) <|cite_end|> <|cite_start|> (Reference: The finite volume element method with quadratic basis functions: ) <|cite_end|> <|cite_start|> (Reference: Box schemes on quadrilateral meshes: ) <|cite_end|> <|cite_start|> (Reference: Analysis of linear and quadratic simplicial finite volume methods for elliptic equations: ) <|cite_end|> <|cite_start|> (Reference: Vertex-centered finite volume schemes of any order over quadrilateral meshes for elliptic boundary value problems: ) <|cite_end|>,
$L^2$ estimate <|cite_start|> (Reference: A Note on the Optimal L2-Estimate of the Finite Volume Element Method: ) <|cite_end|> <|cite_start|> (Reference: On The Accuracy Of The Finite Volume Element Method Based On Piecewise Linear Polynomials: We present a general error estimation framework for a finite volume element (FVE) method based on linear polynomials for solving second-order elliptic boundary value problems. This framework treats the FVE method as a perturbation of the Galerkin finite element method and reveals that regularities in both the exact solution and the source term can affect the accuracy of FVE methods. In particular, the error estimates and counterexamples in this paper will confirm that the FVE method cannot have the standard O(h2) convergence rate in the L2 norm when the source term has the minimum regularity, only being in L2, even if the exact solution is in H2.) <|cite_end|> <|cite_start|> (Reference: On the Finite Volume Element Method for General Self-Adjoint Elliptic Problems: The finite volume element method (FVE) is a discretization technique for partial differential equations. This paper develops discretization energy error estimates for general self-adjoint elliptic boundary value problems with FVE based on triangulations, on which there exist linear finite element spaces, and a very general type of control volumes (covolumes).
The energy error estimates of this paper are also optimal but the restriction conditions for the covolumes given in [R. E. Bank and D. J. Rose, SIAM J. Numer. Anal., 24 (1987), pp. 777--787], [Z. Q. Cai, Numer. Math., 58 (1991), pp. 713--735] are removed. The authors finally provide a counterexample to show that an expected L2-error estimate does not exist in the usual sense. It is conjectured that the optimal order of $\|u-u_h\|_{0,\Omega}$ should be O(h) for the general case.) <|cite_end|> <|cite_start|> (Reference: L2 Error Estimates for a Class of Any Order Finite Volume Schemes Over Quadrilateral Meshes: In this paper, we propose a unified $ L^2 $ error estimate for a class of bi-$r$ finite volume (FV) schemes on a quadrilateral mesh for elliptic equations, where $r\ge 1$ is arbitrary. The main result is to show that the FV solution possesses the optimal order $L^2 $ error provided that $(u,f) \in H^{r+1} \times H^r $, where $u$ is the exact solution and $f$ is the source term of the elliptic equation. Our analysis includes two basic ideas: (1) By the Aubin--Nistche technique, the $L^2$ error estimate of an FV scheme can be reduced to the analysis of the difference of bilinear forms and right-hand sides between the FV and its corresponding finite element (FE) equations, respectively; (2) with the help of a special transfer operator from the trial to test space, the difference between the FV and FE equations can be estimated through analyzing the effect of some Gauss quadrature. Numerical experiments are given to demonstrate the proved results.) <|cite_end|> <|cite_start|> (Reference: L2 error estimate of the finite volume element methods on quadrilateral meshes: ) <|cite_end|> <|cite_start|> (Reference: Optimal Biquadratic Finite Volume Element Methods on Quadrilateral Meshes: In this paper, an optimal biquadratic finite volume element scheme is established and analyzed for elliptic equations on quadrilateral meshes. It is proved that the new scheme has optimal $O(h^2)$ convergence rate in $H^1$ norm and $O(h^3)$ convergence rate in $L^2$ norm with the assumption that each element is an $h^2$-parallelogram.) <|cite_end|> <|cite_start|> (Reference: L2 Error Estimates for High Order Finite Volume Methods on Triangular Meshes: We establish a unified framework for $L^2$ error analysis for high order Lagrange finite volume methods on triangular meshes. Orthogonal conditions are originally proposed to construct dual partitions on triangular meshes, such that the corresponding finite volume method (FVM) schemes hold optimal $L^2$ norm convergence order. Moreover, with the Aubin--Nitsche technique, we prove the optimal $L^2$ error estimate for high order FVM schemes on triangular meshes. Some numerical experiments are presented to demonstrate the proved result.) <|cite_end|>, and superconvergence <|cite_start|> (Reference: Superconvergence of Any Order Finite Volume Schemes for 1D General Elliptic Equations: ) <|cite_end|> <|cite_start|> (Reference: Is 2k-Conjecture Valid for Finite Volume Methods?: This paper is concerned with superconvergence properties of a class of finite volume methods of arbitrary order over rectangular meshes. Our main result is to prove the 2k-conjecture: at each vertex of the underlying rectangular mesh, the bi-$k$ degree finite volume solution approximates the exact solution with an order $ O(h^{2k})$, where $h$ is the mesh size. As byproducts, superconvergence properties for finite volume discretization errors at Lobatto and Gauss points are also obtained. 
All theoretical findings are confirmed by numerical experiments.) <|cite_end|> <|cite_start|> (Reference: SUPERCONVERGENCE OF GENERALIZED DIFFERENCE METHOD FOR ELLIPTIC BOUNDARY VALUE PROBLEM: Some superconvergence results of generalized difference solution for elliptic boundary value problem are given. It is shown that optimal points of the stresses for generalized difference method are the same as that for finite element method.) <|cite_end|> <|cite_start|> (Reference: L2 error estimates and superconvergence of the finite volume element methods on quadrilateral meshes: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence of quadratic finite volume method on triangular meshes: ) <|cite_end|>. In this paper, we mainly focus on the new superconvergent structures developed from the FVEM in 1D. To the authors' knowledge, almost all existing natural superconvergence results of the FEM/FVEM are based on the famous Gauss-Lobatto structure. Interestingly, the new superconvergent structures introduced in this paper cover the Gauss-Lobatto structure and include many more new FVE schemes.
Superconvergence is the phenomenon in which the numerical solution (or the post-processed solution) converges faster than the generally expected rate at certain points or with respect to a certain metric.
It is an important issue, which helps to improve the accuracy of numerical methods such as the finite element method (FEM) <|cite_start|> (Reference: Superconvergence of the gradient for quadratic triangular finite elements: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence in Finite Element Methods and Meshes That are Locally Symmetric with Respect to a Point: Consider a second-order elliptic boundary value problem in any number of space dimensions with locally smooth coefficients and solution. Consider also its numerical approximation by standard conforming finite element methods with, for example, fixed degree piecewise polynomials on a quasi-uniform mesh-family (the “h-method”). It will be shown that, if the finite element function spaces are locally symmetric about a point $x_0 $ with respect to the antipodal map $x \to x_0 - (x - x_0 )$, then superconvergence ensues at xo under mild conditions on what happens outside a neighborhood of $x_0 $. For piecewise polynomials of even degree, superconvergence occurs in function values; for piecewise polynomials of odd degree, it occurs in derivatives.) <|cite_end|> <|cite_start|> (Reference: High order local approximations to derivatives in the finite element method: Consider the approximation of the solution u of an elliptic boundary value problem by means of a finite element Galerkin method of order r, so that the approximate solution uh satisfies uh u = O(hr). Bramble and Schatz (Math. Comp., v. 31, 1977, pp. 94-111) have constructed, for elements satisfying certain uniformity conditions, a simple function K. such that Kh * Uh u = O(h2r-2) in the interior. Their result is generalized here to obtain similar superconvergent order interior approximations also for derivatives of u.) <|cite_end|> <|cite_start|> (Reference: Superconvergence in Galerkin Finite Element Methods: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence of Finite Element Approximations for the Stokes Problem by Projection Methods: This paper derives a general superconvergence result for finite element approximations of the Stokes problem by using projection methods proposed and analyzed recently by Wang [J. Math. Study, 33 (2000), pp. 229--243] for the standard Galerkin method. The superconvergence result is based on some regularity assumption for the Stokes problem and is applicable to any finite element method with regular but nonuniform partitions. The method is proved to give a convergent scheme for certain finite element spaces which fail to satisfy the well-known uniform inf-sup condition of Brezzi and Babuska.) <|cite_end|> and the finite volume element method (FVEM) <|cite_start|> (Reference: Some errors estimates for the box method: We define and analyze several variants of the box method for discretizing elliptic boundary value problems in the plane. Our estimates show the error to be comparable to a standard Galerkin finite element method using piecewise linear polynomials.) <|cite_end|> <|cite_start|> (Reference: On the finite volume element method: ) <|cite_end|> <|cite_start|> (Reference: Is 2k-Conjecture Valid for Finite Volume Methods?: This paper is concerned with superconvergence properties of a class of finite volume methods of arbitrary order over rectangular meshes. Our main result is to prove the 2k-conjecture: at each vertex of the underlying rectangular mesh, the bi-$k$ degree finite volume solution approximates the exact solution with an order $ O(h^{2k})$, where $h$ is the mesh size. 
As byproducts, superconvergence properties for finite volume discretization errors at Lobatto and Gauss points are also obtained. All theoretical findings are confirmed by numerical experiments.) <|cite_end|> <|cite_start|> (Reference: L2 error estimates and superconvergence of the finite volume element methods on quadrilateral meshes: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence of finite volume element method for elliptic problems: ) <|cite_end|> <|cite_start|> (Reference: Vertex-centered finite volume schemes of any order over quadrilateral meshes for elliptic boundary value problems: ) <|cite_end|> etc..
The study of superconvergence mainly involves three aspects:
1) \textbf{the natural superconvergence}, in which the numerical solution superconverges to the exact solution at certain points, such as the famous Gauss-Lobatto structure for the FEM/FVEM, which gives superconvergent points at Gauss points (of the derivative/gradient)
or at Lobatto points (of the function value) (see <|cite_start|> (Reference: Superconvergence in the generalized finite element method: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence of Any Order Finite Volume Schemes for 1D General Elliptic Equations: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence by M -decompositions. Part II: Construction of two-dimensional finite elements: We apply the concept of an M -decomposition introduced in Part I to systematically construct local spaces defining superconvergent hybridizable discontinuous Galerkin methods, and their companion sandwiching mixed methods. This is done in the framework of steady-state diffusion problems for the h - and p -versions of the methods for general polygonal meshes in two-space dimensions.) <|cite_end|> <|cite_start|> (Reference: Superconvergence of quadratic finite volume method on triangular meshes: ) <|cite_end|>) (a short numerical illustration of these two point sets is given right after this list);
2) \textbf{the global superconvergence}, in which there exists a piecewise $k$-order approximation $u_I$ of $u$, such that we have the estimate $\|u_h-u_I\|_{0}=O(h^{k+2})$ or $|u_h-u_I|_{1}=O(h^{k+1})$ for the numerical solution $u_h$. The global superconvergence
results are the theoretical foundation of the other two types of superconvergence (see <|cite_start|> (Reference: Superconvergence in the generalized finite element method: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence of finite volume methods for the second order elliptic problem: ) <|cite_end|> <|cite_start|> (Reference: On superconvergence techniques: ) <|cite_end|> <|cite_start|> (Reference: Superconvergence of quadratic finite volume method on triangular meshes: ) <|cite_end|>);
3) \textbf{the post-processed superconvergence}, in which the post-processed solution superconverges to the exact solution
in some norm (see <|cite_start|> (Reference: Asymptotically exact a posteriori error estimators, part I: Grids with superconvergence: In Part I of this work, we develop superconvergence estimates for piecewise linear finite element approximations on quasi-uniform triangular meshes where most pairs of triangles sharing a common edge form approximate parallelograms. In particular, we first show a superconvergence of the gradient of the finite element solution uh and to the gradient of the interpolant uI. We then analyze a postprocessing gradient recovery scheme, showing that $Q_h\nabla u_h$ is a superconvergent approximation to $\nabla u$. Here Qh is the global L2 projection. In Part II, we analyze a superconvergent gradient recovery scheme for general unstructured, shape regular triangulations. This is the foundation for an a posteriori error estimate and local error indicators.) <|cite_end|> <|cite_start|> (Reference: A posteriori error estimates for finite volume method based on bilinear trial functions for the elliptic equation: ) <|cite_end|> <|cite_start|> (Reference: Postprocessing of a finite volume element method for semilinear parabolic problems: In this paper, we study a postprocessing procedure for improving accuracy of the finite volume element approximations of semilinear parabolic problems. The procedure amounts to solve a source problem on a coarser grid and then solve a linear elliptic problem on a finer grid after the time evolution is finished. We derive error estimates in the L 2 and H 1 norms for the standard finite volume element scheme and an improved error estimate in the H 1 norm. Numerical results demonstrate the accuracy and efficiency of the procedure.) <|cite_end|>). The superconvergent patch recovery (SPR) <|cite_start|> (Reference: The superconvergent patch recovery and a posteriori error estimates. Part 1: The recovery technique: This is the first of two papers concerning superconvergent recovery techniques and a posteriori error estimation. In this paper, a general recovery technique is developed for determining the derivatives (stresses) of the finite element solutions at nodes. The implementation of the recovery technique is simple and cost effective. The technique has been tested for a group of widely used linear, quadratic and cubic elements for both one and two dimensional problems. Numerical experiments demonstrate that the recovered nodal values of the derivatives with linear and cubic elements are superconvergent. One order higher accuracy is achieved by the procedure with linear and cubic elements but two order higher accuracy is achieved for the derivatives with quadratic elements. In particular, an O(h4) convergence of the nodal values of the derivatives for a quadratic triangular element is reported for the first time. The performance of the proposed technique is compared with the widely used smoothing procedure of global L2 projection and other methods. It is found that the derivatives recovered at interelement nodes, by using L2 projection, are also superconvergent for linear elements but not for quadratic elements. Numerical experiments on the convergence of the recovered solutions in the energy norm are also presented. Higher rates of convergence are again observed. The results presented in this part of the paper indicate clearly that a new, powerful and economical process is now available which should supersede the currently used post-processing procedures applied in most codes.) <|cite_end|>
and polynomial preserving recovery (PPR) <|cite_start|> (Reference: A new finite element gradient recovery method:
superconvergence property: This is the first in a series of papers in which a new gradient recovery method is introduced and analyzed. It is proved that the method is superconvergent for translation invariant finite element spaces of any order. The method maintains the simplicity, efficiency, and superconvergence properties of the Zienkiewicz--Zhu patch recovery method. In addition, for uniform triangular meshes, the method is superconvergent for the linear element under the chevron pattern, and ultraconvergent at element edge centers for the quadratic element under the regular pattern. Applications of this new gradient recovery technique will be discussed in forthcoming papers.) <|cite_end|> are two typical examples of the post-processed superconvergence technique. In this paper, we mainly talk about the first two aspects of superconvergence for the FVEM.
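For completeness, the short NumPy sketch below lists, on the reference element $[-1,1]$, the $k$ Gauss points and the $(k+1)$ Lobatto points that appear in the Gauss-Lobatto structure of the first aspect above. It only illustrates these classical point sets and is independent of the FVE schemes analysed in this paper.
\begin{verbatim}
import numpy as np

def gauss_points(k):
    # the k Gauss(-Legendre) points: roots of the Legendre polynomial P_k
    return np.polynomial.legendre.leggauss(k)[0]

def lobatto_points(k):
    # the k+1 Lobatto points: -1, +1 and the roots of P_k'
    interior = np.polynomial.legendre.Legendre.basis(k).deriv().roots()
    return np.sort(np.concatenate(([-1.0], interior, [1.0])))
\end{verbatim}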
We first propose the general $k$-$r$-order $(k-1\leq r\leq 2k-2)$ orthogonal condition and the modified M-decomposition (MMD) technique for the FVEM, which help to discover and prove the new superconvergent structures. For the $k$-order FVEM, the general $k$-$r$-order orthogonal condition means $r$-order orthogonality
to a polynomial space in the sense of the inner product. The dual points as well as the interpolation nodes of the trial-to-test operator are variable, which makes it possible to design more FVE schemes for a given order $k$. The general $k$-$r$-order orthogonal condition is a generalization of the $k$-$(k-1)$-order orthogonal condition proposed in <|cite_start|> (Reference: L2 Error Estimates for High Order Finite Volume Methods on Triangular Meshes: We establish a unified framework for $L^2$ error analysis for high order Lagrange finite volume methods on triangular meshes. Orthogonal conditions are originally proposed to construct dual partitions on triangular meshes, such that the corresponding finite volume method (FVM) schemes hold optimal $L^2$ norm convergence order. Moreover, with the Aubin--Nitsche technique, we prove the optimal $L^2$ error estimate for high order FVM schemes on triangular meshes. Some numerical experiments are presented to demonstrate the proved result.) <|cite_end|>. We still call it the orthogonal condition when no confusion arises.
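Read schematically, and only to convey the flavour of the definition (the notation here is illustrative and the precise statement is given in section~\ref{sec:OrthCond}), an $r$-order orthogonality of this kind asks that a quantity $E_K$ associated with the trial-to-test mapping on each primary element $K$ be orthogonal, in the $L^2$ inner product, to low-order polynomials:
\begin{equation*}
\int_K E_K(x)\, q(x)\, \mathrm{d}x = 0 \qquad \text{for all polynomials } q \text{ of degree} \le r \text{ on } K.
\end{equation*}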
On the other hand, when analyzing the superconvergence of the FEM, it is highly technical to find a proper superclose function $u_I$ that bridges the exact solution $u$ and the numerical solution $u_h$. Researchers often decompose the difference $u-u_I$ into a linear combination of the M-polynomials, in which the coefficients of the M-polynomial combination are obtained by restricting $u-u_I$ so that it enjoys better properties. This method of obtaining the superclose function is called the M-decomposition technique (see <|cite_start|> (Reference: Superconvergence by M -decompositions. Part II: Construction of two-dimensional finite elements: We apply the concept of an M -decomposition introduced in Part I to systematically construct local spaces defining superconvergent hybridizable discontinuous Galerkin methods, and their companion sandwiching mixed methods. This is done in the framework of steady-state diffusion problems for the h - and p -versions of the methods for general polygonal meshes in two-space dimensions.) <|cite_end|>). The M-decomposition technique works well for the FEM; however, it usually cannot be applied to the FVEM directly. For this reason, we propose the modified M-decomposition technique to obtain an appropriate superclose function for the FVEM.
Then, with the help of the orthogonal condition and the MMD technique, we construct the new superconvergent structures for the FVEM and prove their superconvergence. It is shown that, for the $k$-order ($k\geq3$) FVEM, there are many schemes (not just one) having the superconvergence property. As examples, the relationships between the Gauss-Lobatto-structure-based and the orthogonal-condition-based FVE schemes are shown in Figure~\ref{fig:alpha4A5order} (a) for $k=4$ and Figure~\ref{fig:alpha4A5order} (b) for $k=5$.
Furthermore, we also provide simple ways to construct FVE schemes with superconvergence (see {\em Method I} and {\em Method II} in subsection~\ref{subsec:construction_easy}):
for the odd $k$-order FVEM, we can freely choose the $k$ symmetric derivative superconvergent points on any primary element $K$ (excluding the endpoints of $K$);
and for the even $k$-order FVEM, we can freely choose the $(k+1)$ symmetric function-value superconvergent points on any primary element $K$ (including the two endpoints of $K$).
Moreover, for the completeness of the theory, we also present the proof of unconditional stability and the $H^1$ estimates, and give the optimal $L^2$ estimates as a by-product of the superconvergence of the derivative. Here, we show in Table~\ref{tab:relations} the relationships between the orthogonal condition and the convergence properties for the FVE schemes over symmetric dual meshes:
1) all FVE schemes possess the optimal $H^1$ estimates; 2) the $k$-$(k-1)$-order orthogonal condition ensures the optimal $L^2$ estimates and superconvergence of the derivative (see ``superconv 1'' in Table~\ref{tab:relations}); 3) the $k$-$k$-order orthogonal condition ensures the superconvergence of the function value (see ``superconv 2'' in Table~\ref{tab:relations}).
That is to say, when the $k$-$(k-1)$-order or the $k$-$k$-order orthogonal condition is satisfied, the corresponding FVE schemes possess superconvergence properties. We call this superconvergent structure the ``orthogonal structure''.
\begin{table}[htbp!]
\centering
\caption{Relations between the orthogonal condition and properties of FVE schemes in 1D}\label{tab:relations}
\begin{tabular}{c|c|cccc}
\toprule
\multicolumn{2}{c}{FVE schemes} & \multicolumn{4}{c}{Properties of FVE schemes}\\
\cline{1-6}
& orthogonal condition & optimal $H^1$ &optimal $L^2$ & superconv 1& superconv 2\\
\cline{1-6}
odd order & $k$-$(k-1)$-order & $\surd$ & $\surd$ & $\surd$ & - \\
\cline{2-6}
($k=2l-1$) & $k$-$k$-order & $\surd$ & $\surd$ & $\surd$ & $\surd$ \\
\cline{1-6}
& does not satisfy & \multirow{2}{*}{$\surd$} & \multirow{2}{*}{-}
& \multirow{2}{*}{-} & \multirow{2}{*}{-} \\
even order & the $k$-$(k-1)$-order & & & & \\
\cline{2-6}
($k=2l$) & $k$-$(k-1)$-order &\multirow{2}{*}{$\surd$} &\multirow{2}{*}{$\surd$}
&\multirow{2}{*}{$\surd$} &\multirow{2}{*}{$\surd$}\\
& ($k$-$k$-order) & & & & \\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item[1] The ``$\surd$'' mark means that the corresponding property holds, while the ``-'' mark means that no result is available.
\end{tablenotes}
\end{table}
In the following, we introduce the definition of the FVEM and some notation in section~\ref{sec:prelim}.
Then, the $k$-$r$-order orthogonal condition and the modified M-decomposition (MMD) are discussed in section~\ref{sec:OrthCond}.
In section~\ref{sec:Superconvergence}, we present the superconvergence of the derivative and the function value for FVEM.
In section~\ref{sec:construction}, we present the constructions of the FVE schemes with superconvergence.
Finally, we show numerical results in section~\ref{sec:numerical_ex} and draw conclusions in section~\ref{sec:conclusion}.
The stability and $H^1$ error estimate of the FVEM are provided in Appendix A. <|paper_end|> | [
"<|reference_start|> On the Finite Volume Element Method for General Self-Adjoint Elliptic Problems: The finite volume element method (FVE) is a discretization technique for partial differential equations. This paper develops discretization energy error estimates for general self-adjoint elliptic boundary value problems with FVE based on triangulations, on which there exist linear finite element spaces, and a very general type of control volumes (covolumes). \nThe energy error estimates of this paper are also optimal but the restriction conditions for the covolumes given in [R. E. Bank and D. J. Rose, SIAM J. Numer. Anal., 24 (1987), pp. 777--787], [Z. Q. Cai, Numer. Math., 58 (1991), pp. 713--735] are removed. The authors finally provide a counterexample to show that an expected L2-error estimate does not exist in the usual sense. It is conjectured that the optimal order of $\\|u-u_h\\|_{0,\\Omega}$ should be O(h) for the general case. <|reference_end|>",
"<|reference_start|> Optimal Biquadratic Finite Volume Element Methods on Quadrilateral Meshes: In this paper, an optimal biquadratic finite volume element scheme is established and analyzed for elliptic equations on quadrilateral meshes. It is proved that the new scheme has optimal $O(h^2)$ convergence rate in $H^1$ norm and $O(h^3)$ convergence rate in $L^2$ norm with the assumption that each element is an $h^2$-parallelogram. <|reference_end|>",
"<|reference_start|> Superconvergence of Finite Element Approximations for the Stokes Problem by Projection Methods: This paper derives a general superconvergence result for finite element approximations of the Stokes problem by using projection methods proposed and analyzed recently by Wang [J. Math. Study, 33 (2000), pp. 229--243] for the standard Galerkin method. The superconvergence result is based on some regularity assumption for the Stokes problem and is applicable to any finite element method with regular but nonuniform partitions. The method is proved to give a convergent scheme for certain finite element spaces which fail to satisfy the well-known uniform inf-sup condition of Brezzi and Babuska. <|reference_end|>",
"<|reference_start|> Asymptotically exact a posteriori error estimators, part I: Grids with superconvergence: In Part I of this work, we develop superconvergence estimates for piecewise linear finite element approximations on quasi-uniform triangular meshes where most pairs of triangles sharing a common edge form approximate parallelograms. In particular, we first show a superconvergence of the gradient of the finite element solution uh and to the gradient of the interpolant uI. We then analyze a postprocessing gradient recovery scheme, showing that $Q_h\\nabla u_h$ is a superconvergent approximation to $\\nabla u$. Here Qh is the global L2 projection. In Part II, we analyze a superconvergent gradient recovery scheme for general unstructured, shape regular triangulations. This is the foundation for an a posteriori error estimate and local error indicators. <|reference_end|>"
] | [
19,
22,
33,
48
] | {"<|multi_cite_1_1|>": "ss-1484983", "<|multi_cite_1_2|>": "ss-976621", "<|multi_cite_1_3|>": "ss-1255265", "<|multi_cite_1_4|>": "ss-1814640", "<|multi_cite_1_5|>": "ss-1484998", "<|multi_cite_1_6|>": "ss-1476893", "<|multi_cite_1_7|>": "ss-1255268", "<|multi_cite_1_8|>": "ss-2054925", "<|multi_cite_1_9|>": "ss-1814641", "<|multi_cite_1_10|>": "ss-1814642", "<|multi_cite_2_1|>": "ss-1476892", "<|multi_cite_2_2|>": "ss-1476893", "<|multi_cite_2_3|>": "ss-2054927", "<|multi_cite_2_4|>": "ss-2054921", "<|multi_cite_2_5|>": "ss-2054922", "<|multi_cite_2_6|>": "ss-1255269", "<|multi_cite_2_7|>": "ss-1395033", "<|multi_cite_3_1|>": "ss-1814643", "<|multi_cite_3_2|>": "ss-1724982", "<|multi_cite_3_3|>": "ss-1814644", "<|multi_cite_3_4|>": "ss-1814645", "<|multi_cite_3_5|>": "ss-1814646", "<|multi_cite_3_6|>": "ss-1814647", "<|multi_cite_3_7|>": "ss-1814648", "<|multi_cite_4_1|>": "ss-1255266", "<|multi_cite_4_2|>": "ss-1255267", "<|multi_cite_4_3|>": "ss-1814649", "<|multi_cite_4_4|>": "ss-1814650", "<|multi_cite_4_5|>": "ss-1814651", "<|multi_cite_5_1|>": "ss-1814652", "<|multi_cite_5_2|>": "ss-1814653", "<|multi_cite_5_3|>": "ss-1125358", "<|multi_cite_5_4|>": "ss-1255264", "<|multi_cite_5_5|>": "ss-1125360", "<|multi_cite_6_1|>": "ss-1484983", "<|multi_cite_6_2|>": "ss-1255265", "<|multi_cite_6_3|>": "ss-1255267", "<|multi_cite_6_4|>": "ss-1814650", "<|multi_cite_6_5|>": "ss-1814655", "<|multi_cite_6_6|>": "ss-1395033", "<|multi_cite_7_1|>": "ss-1814656", "<|multi_cite_7_2|>": "ss-1255266", "<|multi_cite_7_4|>": "ss-1814657", "<|multi_cite_7_6|>": "ss-1814651", "<|multi_cite_8_1|>": "ss-1814656", "<|multi_cite_8_2|>": "ss-1255268", "<|multi_cite_8_3|>": "ss-2481321", "<|multi_cite_8_4|>": "ss-1814651", "<|multi_cite_9_1|>": "ss-864842", "<|multi_cite_9_2|>": "ss-1814658", "<|multi_cite_9_3|>": "ss-1814659", "<|cite_10|>": "ss-1292299", "<|cite_11|>": "ss-760240", "<|cite_12|>": "ss-1814648", "<|multi_cite_13_2|>": "ss-1814657"} |
2405.09551 | <|paper_start|> Title: Towards Bi-Hemispheric Emotion Mapping through EEG: A Dual-Stream Neural Network Approach
Abstract: Towards Bi-Hemispheric Emotion Mapping through EEG: A Dual-Stream Neural Network Approach: Emotion classification through EEG signals plays a significant role in psychology, neuroscience, and human-computer interaction. This paper addresses the challenge of mapping human emotions using EEG data in the Mapping Human Emotions through EEG Signals FG24 competition. Subjects mimic the facial expressions of an avatar, displaying fear, joy, anger, sadness, disgust, and surprise in a VR setting. EEG data is captured using a multi-channel sensor system to discern brain activity patterns. We propose a novel two-stream neural network employing a Bi-Hemispheric approach for emotion inference, surpassing baseline methods and enhancing emotion recognition accuracy. Additionally, we conduct a temporal analysis revealing that specific signal intervals at the beginning and end of the emotion stimulus sequence contribute significantly to improve accuracy. Leveraging insights gained from this temporal analysis, our approach offers enhanced performance in capturing subtle variations in the states of emotions.
Introduction
Emotion is a central aspect of human experience, profoundly shaping our interactions and perceptions. While humans naturally excel at discerning emotions in others, replicating this nuanced understanding in computers remains a formidable task <|cite_start|> (Reference: {Toward an affect-sensitive multimodal human-computer interaction: The ability to recognize affective states of a person we are communicating with is the core of emotional intelligence. Emotional intelligence is a facet of human intelligence that has been argued to be indispensable and perhaps the most important for successful interpersonal social interaction. This paper argues that next-generation human-computer interaction (HCI) designs need to include the essence of emotional intelligence - the ability to recognize a user's affective states-in order to become more human-like, more effective, and more efficient. Affective arousal modulates all nonverbal communicative cues (facial expressions, body movements, and vocal and physiological reactions). In a face-to-face interaction, humans detect and interpret those interactive signals of their communicator with little or no effort. Yet design and development of an automated system that accomplishes these tasks is rather difficult. This paper surveys the past work in solving these problems by a computer and provides a set of recommendations for developing the first part of an intelligent multimodal HCI-an automatic personalized analyzer of a user's nonverbal affective feedback.) <|cite_end|>. Emotion recognition serves as a crucial step towards imbuing machines with the ability to comprehend and respond to human emotions, garnering significant interest from researchers in human-machine interaction (HMI) and pattern recognition <|cite_start|> (Reference: Smile Detection using Local Binary Patterns and Support Vector Machines: Facial expression recognition has been the subject of much research in the last years within the Computer Vision community. The detection of smiles, however, has received less attention. Its distinctive configuration may pose less problem than other, at times subtle, expressions. On the other hand, smiles can still be very useful as a measure of happiness, enjoyment or even approval. Geometrical or local-based detection approaches like the use of lip edges may not be robust enough and thus researchers have focused on applying machine learning to appearance-based descriptors. This work makes an extensive experimental study of smile detection testing the Local Binary Patterns (LBP) as main descriptors of the image, along with the powerful Support Vector Machines classifier. The results show that error rates can be acceptable, although there is still room for improvement.) <|cite_end|> <|cite_start|> (Reference: Facial expression analysis in a wild sporting environment: ) <|cite_end|>.
Traditionally, studies in emotion recognition have primarily focused on analyzing verbal and nonverbal cues, such as speech <|cite_start|> (Reference: Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on: ) <|cite_end|> and facial expressions <|cite_start|> (Reference: Towards Facial Expression Robustness in Multi-scale Wild Environments: ) <|cite_end|>. However, recent insights from neuroscience suggest that emotions originate from various regions of the brain, including the orbital frontal cortex, ventral medial prefrontal cortex, and amygdala <|cite_start|> (Reference: Neural correlates of social and nonsocial emotions: An fMRI study: ) <|cite_end|>. This neurobiological perspective presents an intriguing opportunity to decode emotions by capturing continuous brain activity signals from these sub-cortical regions.
In the context of virtual reality (VR), where subjects are fully immersed in carefully crafted environments, understanding and recognizing emotions take on a new dimension. By placing electrodes on the scalp to record an electroencephalogram (EEG), researchers can capture neural activity within the brain while the subjects interact with virtual avatars displaying various emotional expressions. This approach offers a unique opportunity to directly observe the neural mechanisms underlying the processing of emotions in a controlled VR environment.
Analyzing EEG signals within the VR paradigm provides valuable insights into how the brain responds to emotional stimuli in immersive environments. Existing EEG-based emotion recognition techniques tackle two primary challenges: feature extraction and accurate classification. EEG signals offer rich data across time, frequency, and \mbox{time-frequency} domains, necessitating robust feature extraction methods.
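To make the feature extraction challenge more concrete, the short sketch below computes classical per-channel band-power features with Welch's method; the sampling rate, frequency-band boundaries, and array shapes are illustrative assumptions rather than the configuration used in this work.
\begin{verbatim}
# Illustrative sketch: per-channel EEG band-power features (the sampling
# rate, bands, and segment length are assumptions, not this paper's setup).
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(eeg, fs=128.0):
    """eeg: (n_channels, n_samples) array -> 1-D feature vector."""
    feats = []
    for channel in eeg:
        freqs, psd = welch(channel, fs=fs, nperseg=min(len(channel), 256))
        df = freqs[1] - freqs[0]
        for lo, hi in BANDS.values():
            band = (freqs >= lo) & (freqs < hi)
            # Approximate the integral of the PSD over each band.
            feats.append(psd[band].sum() * df)
    return np.asarray(feats)

# Example: 32 channels, 4 seconds of synthetic data sampled at 128 Hz.
features = band_power_features(np.random.randn(32, 4 * 128))
\end{verbatim}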
Previous works have already leveraged machine learning techniques to evaluate EEG features, underscoring the complexity of this task <|cite_start|> (Reference: Feature Extraction and Selection for Emotion Recognition from EEG: Emotion recognition from EEG signals allows the direct assessment of the “inner” state of a user, which is considered an important factor in human-machine-interaction. Many methods for feature extraction have been studied and the selection of both appropriate features and electrode locations is usually based on neuro-scientific findings. Their suitability for emotion recognition, however, has been tested using a small amount of distinct feature sets and on different, usually small data sets. A major limitation is that no systematic comparison of features exists. Therefore, we review feature extraction methods for emotion recognition from EEG based on 33 studies. An experiment is conducted comparing these features using machine learning techniques for feature selection on a self recorded data set. Results are presented with respect to performance of different feature selection methods, usage of selected feature types, and selection of electrode locations. Features selected by multivariate methods slightly outperform univariate methods. Advanced feature extraction techniques are found to have advantages over commonly used spectral power bands. Results also suggest preference to locations over parietal and centro-parietal lobes.) <|cite_end|>. Subsequent challenges lie in effectively classifying these features. In this regard, some works have proposed group sparse canonical correlation analysis for simultaneous EEG channel selection and emotion recognition <|cite_start|> (Reference: Multichannel EEG-Based Emotion Recognition via Group Sparse Canonical Correlation Analysis: In this paper, a novel group sparse canonical correlation analysis (GSCCA) method is proposed for simultaneous electroencephalogram (EEG) channel selection and emotion recognition. GSCCA is a group sparse extension of the conventional CCA method to model the linear correlationship between emotional EEG class label vectors and the corresponding EEG feature vectors. In contrast to conventional CCA method or previous GSCCA methods, a major advantage of our GSCCA method is the ability of handling the group feature selection problem from raw EEG features, which makes it very suitable for simultaneously coping with both EEG emotion recognition and automatic channel selection issues where each EEG channel is associated with a group of raw EEG features. To deal with EEG emotion recognition problem, we adopt the popularly used frequency feature to describe the EEG signal by dividing the full EEG frequency band into five parts, i.e., $\boldsymbol{\delta}$, $\boldsymbol{\theta}$, $\boldsymbol{\alpha}$, $\boldsymbol{\beta}$, and $\boldsymbol{\gamma}$ frequency bands, and then extract the frequency band features from each band for GSCCA model learning and emotion recognition. Finally, we conduct extensive experiments on EEG-based emotion recognition based on the SJTU emotion EEG dataset and experimental results demonstrate that the proposed GSCCA method would outperform the state-of-the-art EEG-based emotion recognition approaches.) <|cite_end|>, while others integrated brain activation patterns to enhance emotion recognition performance <|cite_start|> (Reference: EEG Based Emotion Recognition by Combining Functional Connectivity Network and Local Activations: Objective: Spectral power analysis plays a predominant role in electroencephalogram-based emotional recognition. It can reflect activity differences among multiple brain regions. In addition to activation difference, different emotions also involve different large-scale network during related information processing. In this paper, both information propagation patterns and activation difference in the brain were fused to improve the performance of emotional recognition. Methods: We constructed emotion-related brain networks with phase locking value and adopted a multiple feature fusion approach to combine the compensative activation and connection information for emotion recognition. Results: Recognition results on three public emotional databases demonstrated that the combined features are superior to either single feature based on power distribution or network character. Furthermore, the conducted feature fusion analysis revealed the common characters between activation and connection patterns involved in the positive, neutral, and negative emotions for information processing. Significance: The proposed feasible combination of both information propagation patterns and activation difference in the brain is meaningful for developing the effective human–computer interaction systems by adapting to human emotions in the real world applications.) <|cite_end|>.
While these methodologies have demonstrated promising results on specific EEG emotion datasets, there remains a need for further exploration and refinement to achieve robust and generalized emotion recognition systems. The proposed methodology contributes to the field of EEG-based emotion recognition in several significant ways:
\begin{itemize}
\item Incorporates a Bi-Hemispheric approach within a \mbox{two-stream} recurrent neural network, leveraging the distinct processing characteristics of each hemisphere to enhance emotion inference accuracy.
\item Conducts comparative analyses with one-stream neural networks, demonstrating the superiority of the \mbox{Bi-Hemispheric} approach in capturing subtle nuances of emotions.
\item Conducts a temporal analysis of EEG signals, identifying key temporal intervals (first and last intervals) that significantly contribute to emotion classification accuracy, enabling more efficient and precise emotion inference.
\end{itemize} <|paper_end|>
1710.00073 | <|paper_start|> Title: CARMA: Contention-aware Auction-based Resource Management in Architecture
Abstract: CARMA: Contention-aware Auction-based Resource Management in Architecture: As the number of resources on chip multiprocessors (CMPs) increases, the complexity of how to best allocate these resources increases drastically, because the growing number of applications makes the interactions and impacts of the various memory levels more complex. Also, the selection of the objective function that defines what \enquote{best} means for all applications is challenging. Memory-level parallelism (MLP) aware replacement algorithms in CMPs try to maximize the overall system performance or to equalize each application's performance degradation due to sharing. However, depending on the selected \enquote{performance} metric, these algorithms cannot be implemented efficiently, because these centralized approaches mostly need further information about applications' needs. In this paper, we propose a contention-aware game-theoretic resource management approach (CARMA) that uses a market auction mechanism to find an optimal strategy for each application in a resource competition game. The applications learn through repeated interactions to choose their actions on the shared resources. Specifically, we consider two cases: (i) a cache competition game, and (ii) a main processor and co-processor congestion game. We enforce costs for each resource and derive the corresponding bidding strategy. A detailed evaluation of the proposed approach shows that our distributed allocation is scalable and outperforms the static and traditional approaches.
Introduction
\label{introduction}
\IEEEPARstart{T}{he} number of cores on chip multiprocessors (\textit{CMP}s) is increasing each year, and it is believed that only many-core architectures can handle massively parallel applications. Server-side \textit{CMP}s usually have more than 16 cores, and potentially hundreds of applications can run on each server. These systems are going to be the future generation of multi-core processor servers. Applications running on these systems share resources such as the last level cache (\textit{LLC}), the interconnection network, memory controllers, off-chip memories, and co-processors, and the higher number of applications makes the interactions and impacts of the various resource levels more complex. Along with the rapid growth of core integration, the performance of applications highly depends on the allocation of resources and especially on the \textit{contention} for shared resources <|cite_start|> (Reference: The impact of memory subsystem resource sharing on datacenter applications: In this paper we study the impact of sharing memory resources on five Google datacenter applications: a web search engine, bigtable, content analyzer, image stitching, and protocol buffer. While prior work has found neither positive nor negative effects from cache sharing across the PARSEC benchmark suite, we find that across these datacenter applications, there is both a sizable benefit and a potential degradation from improperly sharing resources. In this paper, we first present a study of the importance of thread-to-core mappings for applications in the datacenter as threads can be mapped to share or to not share caches and bus bandwidth. Second, we investigate the impact of co-locating threads from multiple applications with diverse memory behavior and discover that the best mapping for a given application changes depending on its co-runner. Third, we investigate the application characteristics that impact performance in the various thread-to-core mapping scenarios. Finally, we present both a heuristics-based and an adaptive approach to arrive at good thread-to-core decisions in the datacenter. We observe performance swings of up to 25% for web search and 40% for other key applications, simply based on how application threads are mapped to cores. By employing our adaptive thread-to-core mapper, the performance of the datacenter applications presented in this work improved by up to 22% over status quo thread-to-core mapping and performs within 3% of optimal.) <|cite_end|> <|cite_start|> (Reference: Addressing shared resource contention in multicore processors via scheduling: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resource can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources.
We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables to evaluate these schemes separately from the scheduling algorithm itself and to compare them to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2\% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.) <|cite_end|> <|cite_start|> (Reference: Communist, utilitarian, and capitalist cache policies on CMPs: caches as a shared resource: As chip multiprocessors (CMPs) become increasingly mainstream, architects have likewise become more interested in how best to share a cache hierarchy among multiple simultaneous threads of execution. The complexity of this problem is exacerbated as the number of simultaneous threads grows from two or four to the tens or hundreds. However, there is no consensus in the architectural community on what "best" means in this context. Some papers in the literature seek to equalize each thread's performance loss due to sharing, while others emphasize maximizing overall system performance. Furthermore, the specific effect of these goals varies depending on the metric used to define "performance". In this paper we label equal performance targets as Communist cache policies and overall performance targets as Utilitarian cache policies. We compare both of these models to the most common current model of a free-for-all cache (a Capitalist policy). We consider various performance metrics, including miss rates, bandwidth usage, and IPC, including both absolute and relative values of each metric. Using analytical models and behavioral cache simulation, we find that the optimal partitioning of a shared cache can vary greatly as different but reasonable definitions of optimality are applied. We also find that, although Communist and Utilitarian targets are generally compatible, each policy has workloads for which it provides poor overall performance or poor fairness, respectively. Finally, we find that simple policies like LRU replacement and static uniform partitioning are not sufficient to provide near-optimal performance under any reasonable definition, indicating that some thread-aware cache resource allocation mechanism is required.) <|cite_end|> <|cite_start|> (Reference: Fair cache sharing and partitioning in a chip multiprocessor architecture: This paper presents a detailed study of fairness in cache sharing between threads in a chip multiprocessor (CMP) architecture. Prior work in CMP architectures has only studied throughput optimization techniques for a shared cache. The issue of fairness in cache sharing, and its relation to throughput, has not been studied. Fairness is a critical issue because the operating system (OS) thread scheduler's effectiveness depends on the hardware to provide fair cache sharing to co-scheduled threads. Without such hardware, serious problems, such as thread starvation and priority inversion, can arise and render the OS scheduler ineffective. This paper makes several contributions. 
First, it proposes and evaluates five cache fairness metrics that measure the degree of fairness in cache sharing, and shows that two of them correlate very strongly with the execution-time fairness. Execution-time fairness is defined as how uniform the execution times of co-scheduled threads are changed, where each change is relative to the execution time of the same thread running alone. Secondly, using the metrics, the paper proposes static and dynamic L2 cache partitioning algorithms that optimize fairness. The dynamic partitioning algorithm is easy to implement, requires little or no profiling, has low overhead, and does not restrict the cache replacement algorithm to LRU. The static algorithm, although requiring the cache to maintain LRU stack information, can help the OS thread scheduler to avoid cache thrashing. Finally, this paper studies the relationship between fairness and throughput in detail. We found that optimizing fairness usually increases throughput, while maximizing throughput does not necessarily improve fairness. Using a set of co-scheduled pairs of benchmarks, on average our algorithms improve fairness by a factor of 4/spl times/, while increasing the throughput by 15%, compared to a nonpartitioned shared cache.) <|cite_end|> <|cite_start|> (Reference: Managing distributed, shared l2 caches through os-level page allocation: This paper presents and studies a distributed L2 cache management approach through OS-level page allocation for future many-core processors. L2 cache management is a crucial multicore processor design aspect to overcome non-uniform cache access latency for good program performance and to reduce on-chip network traffic and related power consumption. Unlike previously studied hardware-based private and shared cache designs implementing a "fixed" caching policy, the proposed OS-micro architecture approach is flexible; it can easily implement a wide spectrum of L2 caching policies without complex hardware support. Furthermore, our approach can provide differentiated execution environment to running programs by dynamically controlling data placement and cache sharing degrees. We discuss key design issues of the proposed approach and present preliminary experimental results showing the promise of our approach) <|cite_end|> <|cite_start|> (Reference: CAGE: A Contention-Aware Game-Theoretic Model for Heterogeneous Resource Assignment: Traditional resource management systems rely on a centralized approach to manage users running on each resource. The centralized resource management system is not scalable for large-scale servers as the number of users running on shared resources is increasing dramatically and the centralized manager may not have enough information about applications' need. In this paper we propose a distributed game-theoretic resource management approach using market auction mechanism to find optimal strategy in a resource competition game. The applications learn through repeated interactions to choose their action on choosing the shared resources. Specifically, we look into two case studies of cache competition game and main processor and co-processor congestion game. We enforce costs for each resource and derive bidding strategy. Accurate evaluation of the proposed approach show that our distributed allocation is scalable and outperforms the static and traditional approaches.) 
<|cite_end|> <|cite_start|> (Reference: Stochastic modeling and optimization of stragglers: MapReduce framework is widely used to parallelize batch jobs since it exploits a high degree of multi-tasking to process them. However, it has been observed that when the number of servers increases, the map phase can take much longer than expected. This paper analytically shows that the stochastic behavior of the servers has a negative effect on the completion time of a MapReduce job, and continuously increasing the number of servers without accurate scheduling can degrade the overall performance. We analytically model the map phase in terms of hardware, system, and application parameters to capture the effects of stragglers on the performance. Mean sojourn time (MST), the time needed to sync the completed tasks at a reducer, is introduced as a performance metric and mathematically formulated. Following that, we stochastically investigate the optimal task scheduling which leads to an equilibrium property in a datacenter with different types of servers. Our experimental results show the performance of the different types of schedulers targeting MapReduce applications. We also show that, in the case of mixed deterministic and stochastic schedulers, there is an optimal scheduler that can always achieve the lowest MST.) <|cite_end|> <|cite_start|> (Reference: Optimal Placement of Cores, Caches and Memory Controllers in Network On-Chip: Parallel programming is emerging fast and intensive applications need more resources, so there is a huge demand for on-chip multiprocessors. Accessing L1 caches beside the cores are the fastest after registers but the size of private caches cannot increase because of design, cost and technology limits. Then split I-cache and D-cache are used with shared LLC (last level cache). For a unified shared LLC, bus interface is not scalable, and it seems that distributed shared LLC (DSLLC) is a better choice. Most of papers assume a distributed shared LLC beside each core in on-chip network. Many works assume that DSLLCs are placed in all cores; however, we will show that this design ignores the effect of traffic congestion in on-chip network. In fact, our work focuses on optimal placement of cores, DSLLCs and even memory controllers to minimize the expected latency based on traffic load in a mesh on-chip network with fixed number of cores and total cache capacity. We try to do some analytical modeling deriving intended cost function and then optimize the mean delay of the on-chip network communication. This work is supposed to be verified using some traffic patterns that are run on CSIM simulator.) <|cite_end|> <|cite_start|> (Reference: Evaluating the combined impact of node architecture and cloud workload characteristics on network traffic and performance/cost: The combined impact of node architecture and workload characteristics on off-chip network traffic with performance/cost analysis has not been investigated before in the context of emerging cloud applications. Motivated by this observation, this paper performs a thorough characterization of twelve cloud workloads using a full-system datacenter simulation infrastructure. We first study the inherent network characteristics of emerging cloud applications including message inter-arrival times, packet sizes, inter-node communication overhead, self-similarity, and traffic volume. Then, we study the effect of hardware architectural metrics on network traffic. 
Our experimental analysis reveals that (1) the message arrival times and packet-size distributions exhibit variances across different cloud applications, (2) the inter-arrival times imply a large amount of self-similarity as the number of nodes increase, (3) the node architecture can play a significant role in shaping the overall network traffic, and finally, (4) the applications we study can be broadly divided into those which perform better in a scale-out or scale-up configuration at node level and into two categories, namely, those that have long-duration, low-burst flows and those that have short-duration, high-burst flows. Using the results of (3) and (4), the paper discusses the performance/cost trade-offs for scale-out and scale-up approaches and proposes an analytical model that can be used to predict the communication and computation demand for different configurations. It is shown that the difference between two different node architecture's performance per dollar cost (under same number of cores system wide) can be as high as 154 percent which disclose the need for accurate characterization of cloud applications before wasting the precious cloud resources by allocating wrong architecture. The results of this study can be used for system modeling, capacity planning and managing heterogeneous resources for large-scale system designs.) <|cite_end|> <|cite_start|> (Reference: Towards Stochastically Optimizing Data Computing Flows: With rapid growth in the amount of unstructured data produced by memory-intensive applications, large scale data analytics has recently attracted increasing interest. Processing, managing and analyzing this huge amount of data poses several challenges in cloud and data center computing domain. Especially, conventional frameworks for distributed data analytics are based on the assumption of homogeneity and non-stochastic distribution of different data-processing nodes. The paper argues the fundamental limiting factors for scaling big data computation. It is shown that as the number of series and parallel computing servers increase, the tail (mean and variance) of the job execution time increase. We will first propose a model to predict the response time of highly distributed processing tasks and then propose a new practical computational algorithm to optimize the response time.) <|cite_end|> <|cite_start|> (Reference: evaluating cloud workload characteristics: ) <|cite_end|>. In particular, as the number of co-runners on a shared resource increases, the magnitude of performance degradation increases. Also, selecting the objective function that defines what \enquote{best} means for all applications is challenging; for instance, it may be theoretically impossible to simultaneously improve the IPC of one application and the memory latency of another in the same system. As a result, this new architectural paradigm introduces several new challenges in terms of the scalability of resource management and assignment on these large-scale servers. Therefore, a scalable competition mechanism that lets applications reach the optimal assignment can significantly improve the performance of co-runners on a shared resource. Figure~\ref{fig:Slow_down} shows an example of the performance degradation of 10 \textit{spec 2006} applications when running on a shared 10MB \textit{LLC} (Shared), or when running on a private 1MB \textit{LLC} (Separate). \\
\begin{figure}[!tb]
\centering
\includegraphics[height=1.5in, width=3.5in]{Images/Perf.pdf}
\vspace{-1.5\baselineskip}
\caption{Performance degradation of 10 different \textit{spec 2006} applications sharing \textit{LLC}.}
\label{fig:Slow_down}
\vspace{-1.2\baselineskip}
\end{figure}
\indent Among these shared resources, sharing \textit{CPU}s and \textit{LLC}s plays an important role in overall CMP utilization and performance. Modern \textit{CMP}s are moving towards heterogeneous architecture designs, where one can take advantage of either a small number of high-performance \textit{CPU}s or a higher number of low-performance cores. The advent of \textit{Intel Xeon Phi} co-processors is an example of such heterogeneous architectures, where at run-time the programmer can decide to run any part of the code on a small number of \textit{Xeon} processors or a higher number of \textit{Xeon Phi} co-processors. Therefore, the burden of deciding which shared resources to acquire is moving towards the applications. In addition to the shared \textit{CPU}s, a shared \textit{LLC} keeps data on chip and reduces off-chip communication costs <|cite_start|> (Reference: Organizing the last line of defense before hitting the memory wall for cmps: The last line of defense in the cache hierarchy before going to off-chip memory is very critical in chip multiprocessors (CMPs) from both the performance and power perspectives. We investigate different organizations for this last line of defense (assumed to be L2 in this article) towards reducing off-chip memory accesses. We evaluate the trade-offs between private L2 and address-interleaved shared L2 designs, noting their individual benefits and drawbacks. The possible imbalance between the L2 demands across the CPUs favors a shared L2 organization, while the interference between these demands can favor a private L2 organization. We propose a new architecture, called Shared Processor-Based Split L2, that captures the benefits of these two organizations, while avoiding many of their drawbacks. Using several applications from the SPEC OMP suite and a commercial benchmark, Specjbb, on a complete system simulator, we demonstrate the benefits of this shared processor-based L2 organization. Our results show as much as 42.50% improvement in IPC over the private organization (with 11.52% on the average), and as much as 42.22% improvement over the shared interleaved organization (with 9.76% on the average).) <|cite_end|>. Sometimes an application may flood the cache, occupying a large portion of the available space and hurting the performance of another application that issues memory accesses only rarely but whose accesses are latency-sensitive. Recently, many proposals target partitioning the cache space between applications such that (1) each application gets the minimum required space, so that per-application performance is guaranteed to be at an acceptable level, and (2) system performance is improved by deciding how the remaining space should be allocated to each one. \\
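To make the second goal concrete, the sketch below hands out the leftover ways greedily to whichever application is predicted to save the most misses from one extra way; the miss curves, application names, and allocation rule are hypothetical illustrations, not the mechanism proposed in this paper.
\begin{verbatim}
# Illustrative greedy allocation of leftover cache ways.  The miss-rate
# curves below are hypothetical; this is not the mechanism of this paper.
def allocate_ways(total_ways, min_ways, misses_at):
    """min_ways[a]: guaranteed ways of app a.
    misses_at[a][w]: predicted misses of app a when it owns w ways."""
    alloc = dict(min_ways)
    spare = total_ways - sum(alloc.values())
    for _ in range(spare):
        # Hand the next way to the app with the largest marginal gain.
        best = max(alloc,
                   key=lambda a: misses_at[a][alloc[a]] - misses_at[a][alloc[a] + 1])
        alloc[best] += 1
    return alloc

misses = {"A": [90, 60, 45, 40, 38, 37, 36, 36, 36],
          "B": [80, 70, 62, 55, 49, 44, 40, 37, 35]}
print(allocate_ways(8, {"A": 1, "B": 1}, misses))  # -> {'A': 3, 'B': 5}
\end{verbatim}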
\indent Prior schemes <|cite_start|> (Reference: Addressing shared resource contention in multicore processors via scheduling: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resource can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables to evaluate these schemes separately from the scheduling algorithm itself and to compare them to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2\% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.) <|cite_end|> <|cite_start|> (Reference: Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches: This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning) <|cite_end|> <|cite_start|> (Reference: Gaining Insights into Multicore Cache Partitioning: Bridging the Gap
between Simulation and Real Systems: Cache partitioning and sharing is critical to the effective utilization of multicore processors. However, almost all existing studies have been evaluated by simulation that often has several limitations, such as excessive simulation time, absence of OS activities and proneness to simulation inaccuracy. To address these issues, we have taken an efficient software approach to supporting both static and dynamic cache partitioning in OS through memory address mapping. We have comprehensively evaluated several representative cache partitioning schemes with different optimization objectives, including performance, fairness, and quality of service (QoS). Our software approach makes it possible to run the SPEC CPU2006 benchmark suite to completion. Besides confirming important conclusions from previous work, we are able to gain several insights from whole-program executions, which are infeasible from simulation. For example, giving up some cache space in one program to help another one may improve the performance of both programs for certain workloads due to reduced contention for memory bandwidth. Our evaluation of previously proposed fairness metrics is also significantly different from a simulation-based study. The contributions of this study are threefold. (1) To the best of our knowledge, this is a highly comprehensive execution- and measurement-based study on multicore cache partitioning. This paper not only confirms important conclusions from simulation-based studies, but also provides new insights into dynamic behaviors and interaction effects. (2) Our approach provides a unique and efficient option for evaluating multicore cache partitioning. The implemented software layer can be used as a tool in multicore performance evaluation and hardware design. (3) The proposed schemes can be further refined for OS kernels to improve performance.) <|cite_end|> <|cite_start|> (Reference: CQoS: A framework for enabling qos in shared caches of {CMP} platforms: Cache hierarchies have been traditionally designed for usage by a single application, thread or core. As multi-threaded (MT) and multi-core (CMP) platform architectures emerge and their workloads range from single-threaded and multithreaded applications to complex virtual machines (VMs), a shared cache resource will be consumed by these different entities generating heterogeneous memory access streams exhibiting different locality properties and varying memory sensitivity. As a result, conventional cache management approaches that treat all memory accesses equally are bound to result in inefficient space utilization and poor performance even for applications with good locality properties. To address this problem, this paper presents a new cache management framework (CQoS) that (1) recognizes the heterogeneity in memory access streams, (2) introduces the notion of QoS to handle the varying degrees of locality and latency sensitivity and (3) assigns and enforces priorities to streams based on latency sensitivity, locality degree and application performance needs. To achieve this, we propose CQoS options for priority classification, priority assignment and priority enforcement. We briefly describe CQoS priority classification and assignment options -- ranging from user-driven and developer-driven to compiler-detected and flow-based approaches. 
Our focus in this paper is on CQoS mechanisms for priority enforcement -- these include (1) selective cache allocation, (2) static/dynamic set partitioning and (3) heterogeneous cache regions. We discuss the architectural design and implementation complexity of these CQoS options. To evaluate the performance trade-offs for these options, we have modeled these CQoS options in a cache simulator and evaluated their performance in CMP platforms running network-intensive server workloads. Our simulation results show the effectiveness of our proposed options and make the case for CQoS in future multi-threaded/multi-core platforms since it improves shared cache efficiency and increases overall system performance as a result.) <|cite_end|> <|cite_start|> (Reference: Organizing the last line of defense before hitting the memory wall for cmps: The last line of defense in the cache hierarchy before going to off-chip memory is very critical in chip multiprocessors (CMPs) from both the performance and power perspectives. We investigate different organizations for this last line of defense (assumed to be L2 in this article) towards reducing off-chip memory accesses. We evaluate the trade-offs between private L2 and address-interleaved shared L2 designs, noting their individual benefits and drawbacks. The possible imbalance between the L2 demands across the CPUs favors a shared L2 organization, while the interference between these demands can favor a private L2 organization. We propose a new architecture, called Shared Processor-Based Split L2, that captures the benefits of these two organizations, while avoiding many of their drawbacks. Using several applications from the SPEC OMP suite and a commercial benchmark, Specjbb, on a complete system simulator, we demonstrate the benefits of this shared processor-based L2 organization. Our results show as much as 42.50% improvement in IPC over the private organization (with 11.52% on the average), and as much as 42.22% improvement over the shared interleaved organization (with 9.76% on the average).) <|cite_end|> <|cite_start|> (Reference: Architectural support for operating system-driven cmp cache management: The role of the operating system (OS) in managing shared resources such as CPU time, memory, peripherals, and even energy is well motivated and understood [23]. Unfortunately, one key resource — lower-level shared cache in chip multi-processors — is commonly managed purely in hardware by rudimentary replacement policies such as least-recently-used (LRU). The rigid nature of the hardware cache management policy poses a serious problem since there is no single best cache management policy across all sharing scenarios. For example, the cache management policy for a scenario where applications from a single organization are running under "best effort" performance expectation is likely to be different from the policy for a scenario where applications from competing business entities (say, at a third party data center) are running under a minimum service level expectation. When it comes to managing shared caches, there is an inherent tension between flexibility and performance. On one hand, managing the shared cache in the OS offers immense policy flexibility since it may be implemented in software. Unfortunately, it is prohibitively expensive in terms of performance for the OS to be involved in managing temporally fine-grain events such as cache allocation. 
On the other hand, sophisticated hardware-only cache management techniques to achieve fair sharing or throughput maximization have been proposed. But they offer no policy flexibility. This paper addresses this problem by designing architectural support for OS to efficiently manage shared caches with a wide variety of policies. Our scheme consists of a hardware cache quota management mechanism, an OS interface and a set of OS level quota orchestration policies. The hardware mechanism guarantees that OS-specified quotas are enforced in shared caches, thus eliminating the need for (and the performance penalty of) temporally fine-grained OS intervention. The OS retains policy flexibility since it can tune the quotas during regularly scheduled OS interventions. We demonstrate that our scheme can support a wide range of policies including policies that provide (a) passive performance differentiation, (b) reactive fairness by miss-rate equalization and (c) reactive performance differentiation.) <|cite_end|> <|cite_start|> (Reference: Analysis and approximation of optimal co-scheduling on chip multiprocessors: Cache sharing among processors is important for Chip Multiprocessors to reduce inter-thread latency, but also brings cache contention, degrading program performance considerably. Recent studies have shown that job co-scheduling can effectively alleviate the contention, but it remains an open question how to efficiently find optimal co-schedules. Solving the question is critical for determining the potential of a co-scheduling system. This paper presents a theoretical analysis of the complexity of co-scheduling, proving its NP-completeness. Furthermore, for a special case when there are two sharers per chip, we propose an algorithm that finds the optimal co-schedules in polynomial time. For more complex cases, we design and evaluate a sequence of approximation algorithms, among which, the hierarchical matching algorithm produces near-optimal schedules and shows good scalability. This study facilitates the evaluation of co-scheduling systems, as well as offers some techniques directly usable in proactive job co-scheduling.) <|cite_end|> are marching towards these two goals, usually by trading off the system complexity and maximum system utilization. It is shown that neither a pure private \textit{LLC}, nor a pure shared \textit{LLC}, can provide optimal performance for different workloads <|cite_start|> (Reference: Managing distributed, shared l2 caches through os-level page allocation: This paper presents and studies a distributed L2 cache management approach through OS-level page allocation for future many-core processors. L2 cache management is a crucial multicore processor design aspect to overcome non-uniform cache access latency for good program performance and to reduce on-chip network traffic and related power consumption. Unlike previously studied hardware-based private and shared cache designs implementing a "fixed" caching policy, the proposed OS-micro architecture approach is flexible; it can easily implement a wide spectrum of L2 caching policies without complex hardware support. Furthermore, our approach can provide differentiated execution environment to running programs by dynamically controlling data placement and cache sharing degrees. We discuss key design issues of the proposed approach and present preliminary experimental results showing the promise of our approach) <|cite_end|>. 
In general, cache partitioning techniques can be divided into way partitioning and co-scheduling techniques. In a set-associative cache, partitioning is done by per-way allocation. For example, in a 4-way 512KB shared cache, allocating 128KB to application \textit{A} means allowing it to store data blocks in only one way per set, without accessing the remaining ways. Co-scheduling techniques try to co-schedule, at the same time, a set of applications with the lowest interference, such that the magnitude of the slow-down for each application is the same or a performance metric is optimized for all applications. However, it is shown that, depending on the objective function chosen as the performance metric, the resulting cache allocations can be totally different <|cite_start|> (Reference: Communist, utilitarian, and capitalist cache policies on CMPs: caches as a shared resource: As chip multiprocessors (CMPs) become increasingly mainstream, architects have likewise become more interested in how best to share a cache hierarchy among multiple simultaneous threads of execution. The complexity of this problem is exacerbated as the number of simultaneous threads grows from two or four to the tens or hundreds. However, there is no consensus in the architectural community on what "best" means in this context. Some papers in the literature seek to equalize each thread's performance loss due to sharing, while others emphasize maximizing overall system performance. Furthermore, the specific effect of these goals varies depending on the metric used to define "performance". In this paper we label equal performance targets as Communist cache policies and overall performance targets as Utilitarian cache policies. We compare both of these models to the most common current model of a free-for-all cache (a Capitalist policy). We consider various performance metrics, including miss rates, bandwidth usage, and IPC, including both absolute and relative values of each metric. Using analytical models and behavioral cache simulation, we find that the optimal partitioning of a shared cache can vary greatly as different but reasonable definitions of optimality are applied. We also find that, although Communist and Utilitarian targets are generally compatible, each policy has workloads for which it provides poor overall performance or poor fairness, respectively. Finally, we find that simple policies like LRU replacement and static uniform partitioning are not sufficient to provide near-optimal performance under any reasonable definition, indicating that some thread-aware cache resource allocation mechanism is required.) <|cite_end|>. In general, prior schemes have the following three limitations:\\
\indent \textbf{1. Scalability}: All of the prior schemes suffer from scalability issues, especially when the approach tracks the applications' dynamic behavior <|cite_start|> (Reference: Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches: This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning) <|cite_end|> <|cite_start|> (Reference: Addressing shared resource contention in multicore processors via scheduling: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resource can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables to evaluate these schemes separately from the scheduling algorithm itself and to compare them to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2\% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.) <|cite_end|> <|cite_start|> (Reference: Analysis and approximation of optimal co-scheduling on chip multiprocessors: Cache sharing among processors is important for Chip Multiprocessors to reduce inter-thread latency, but also brings cache contention, degrading program performance considerably. Recent studies have shown that job co-scheduling can effectively alleviate the contention, but it remains an open question how to efficiently find optimal co-schedules. Solving the question is critical for determining the potential of a co-scheduling system. This paper presents a theoretical analysis of the complexity of co-scheduling, proving its NP-completeness. Furthermore, for a special case when there are two sharers per chip, we propose an algorithm that finds the optimal co-schedules in polynomial time. For more complex cases, we design and evaluate a sequence of approximation algorithms, among which, the hierarchical matching algorithm produces near-optimal schedules and shows good scalability. This study facilitates the evaluation of co-scheduling systems, as well as offers some techniques directly usable in proactive job co-scheduling.) <|cite_end|>. The reason is that the algorithm complexity becomes higher in dynamic approaches. The root cause of this complexity is that all previous techniques make their decisions (cache partitioning, co-scheduling) in a centralized fashion, using central hardware or software. For example, the main algorithm of <|cite_start|> (Reference: Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches: This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning) <|cite_end|> has exponential complexity $O( \binom{N+K-1}{N-1} )$ where $N$ is the number of applications sharing the \textit{LLC} and $K$ is the number of ways. Table~\ref{table:complexity} shows state-of-the-art cache partitioning algorithms and the complexity of checking the performance of different permutations.\\
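To give a feel for how quickly this centralized search space grows, the sketch below evaluates $\binom{N+K-1}{N-1}$ for a 16-way LLC shared by an increasing number of applications; the core counts are only illustrative.
\begin{verbatim}
# Number of possible way-partitionings a centralized algorithm would have
# to consider (stars-and-bars count; 16-way LLC assumed for illustration).
from math import comb

K = 16                       # cache ways
for N in (2, 4, 8, 16):      # co-running applications
    print(N, comb(N + K - 1, N - 1))
# 2 17
# 4 969
# 8 245157
# 16 300540195
\end{verbatim}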
\indent \textbf{2. Static-based}: Most of the prior works use static co-scheduling to reduce the slow-down of co-running applications on the same shared cache. However, static-based approaches cannot capture the dynamic behavior of applications. Figure~\ref{fig:Dynamic} shows an example of two applications' IPC (\textit{hmmer} and \textit{mcf}) from \textit{Spec 2006} under different \textit{LLC} sizes. Let us consider a case where we have two cache sizes, a large cache of 1MB which can be shared between applications, and two private caches of 512KB which are not shared. The two applications are competing for the cache space. Suppose that both applications have two phases, $(0,T)$ and $(T,2T)$. If the first application gets the larger cache space, its \textit{IPC} increases by 35 percent in the first phase and by 20.6 percent in the second phase. The second application's \textit{IPC} increases by 15 percent in the first phase and by 36.84 percent in the second phase if it gets the larger cache space. In a static-based scheduling approach, the larger \textit{LLC} is always allocated to the first application, whose IPC benefit is higher over the whole interval $(0, 2T)$; in \textit{CARMA}, however, the applications compete for the shared resources, so in the first phase the larger \textit{LLC} is allocated to the first application and in the second phase \textit{CARMA} allocates it to the second application. Therefore, static-based approaches cannot capture the dynamism in applications' behavior and ultimately degrade performance significantly. \\
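As a back-of-the-envelope illustration with the numbers above, assume the two phases have equal length and that the IPC gains of whichever application holds the 1MB \textit{LLC} in each phase simply add up; under these illustrative assumptions, the static assignment (application 1 in both phases) and the phase-aware assignment (application 1 in $(0,T)$, application 2 in $(T,2T)$) deliver
\[
\text{static: } 35\% + 20.6\% = 55.6\%
\qquad\text{vs.}\qquad
\text{CARMA: } 35\% + 36.84\% = 71.84\% ,
\]
so the phase-aware assignment captures roughly 16 additional percentage points of IPC improvement in this example.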
\indent \textbf{3. Fairness}: Defining a single fairness parameter for multiple applications is challenging, since applications derive different performance benefits from each resource during each phase. In prior works, fairness has been defined with a single metric (e.g., IPC, power, weighted speedup) for all applications, so the optimization goal of the algorithms is the same for every application. Consequently, applications that desire different metrics cannot be aggregated into one platform-wide decision. For instance, if one application needs better IPC while another requires lower energy, previous algorithms cannot model this. The only way to address such diversity of metrics is an appropriate translation between them (e.g., IPC to power), which is not trivial and has not been addressed in prior studies.\\
\begin{table}[!tb]
\centering
\caption{\label{table:complexity} Complexity comparison of state-of-the-art \textit{LLC} partitioning/co-scheduling algorithms.}
\begin{tabular}{|c|c|}
\hline Algorithm & Search Space\\
\hline Utility-based main algorithm <|cite_start|> (Reference: Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches: This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning) <|cite_end|> & $ \binom{N+K-1}{N-1}$ \\
\hline \pbox{20cm}{Greedy Co-scheduling <|cite_start|> (Reference: Analysis and approximation of optimal co-scheduling on chip multiprocessors: Cache sharing among processors is important for Chip Multiprocessors to reduce inter-thread latency, but also brings cache contention, degrading program performance considerably. Recent studies have shown that job co-scheduling can effectively alleviate the contention, but it remains an open question how to efficiently find optimal co-schedules. Solving the question is critical for determining the potential of a co-scheduling system. This paper presents a theoretical analysis of the complexity of co-scheduling, proving its NP-completeness. Furthermore, for a special case when there are two sharers per chip, we propose an algorithm that finds the optimal co-schedules in polynomial time. For more complex cases, we design and evaluate a sequence of approximation algorithms, among which, the hierarchical matching algorithm produces near-optimal schedules and shows good scalability. This study facilitates the evaluation of co-scheduling systems, as well as offers some techniques directly usable in proactive job co-scheduling.) <|cite_end|>\\ $N$ applications and $N/K$ caches} & $ \binom{N}{K} $ \\
\hline \pbox{20cm}{Hierarchical perfect matching <|cite_start|> (Reference: Analysis and approximation of optimal co-scheduling on chip multiprocessors: Cache sharing among processors is important for Chip Multiprocessors to reduce inter-thread latency, but also brings cache contention, degrading program performance considerably. Recent studies have shown that job co-scheduling can effectively alleviate the contention, but it remains an open question how to efficiently find optimal co-schedules. Solving the question is critical for determining the potential of a co-scheduling system. This paper presents a theoretical analysis of the complexity of co-scheduling, proving its NP-completeness. Furthermore, for a special case when there are two sharers per chip, we propose an algorithm that finds the optimal co-schedules in polynomial time. For more complex cases, we design and evaluate a sequence of approximation algorithms, among which, the hierarchical matching algorithm produces near-optimal schedules and shows good scalability. This study facilitates the evaluation of co-scheduling systems, as well as offers some techniques directly usable in proactive job co-scheduling.) <|cite_end|> \\ $N$ applications} & $N^4 $ \\
\hline \pbox{20cm}{Local optimization <|cite_start|> (Reference: Analysis and approximation of optimal co-scheduling on chip multiprocessors: Cache sharing among processors is important for Chip Multiprocessors to reduce inter-thread latency, but also brings cache contention, degrading program performance considerably. Recent studies have shown that job co-scheduling can effectively alleviate the contention, but it remains an open question how to efficiently find optimal co-schedules. Solving the question is critical for determining the potential of a co-scheduling system. This paper presents a theoretical analysis of the complexity of co-scheduling, proving its NP-completeness. Furthermore, for a special case when there are two sharers per chip, we propose an algorithm that finds the optimal co-schedules in polynomial time. For more complex cases, we design and evaluate a sequence of approximation algorithms, among which, the hierarchical matching algorithm produces near-optimal schedules and shows good scalability. This study facilitates the evaluation of co-scheduling systems, as well as offers some techniques directly usable in proactive job co-scheduling.) <|cite_end|> \\ $N$ applications and $N/K$ caches} & ${(N/K)}^2 \binom{2K}{K} $\\
\hline \pbox{20cm}{CARMA \\ $N$ applications and $K$ resources} & ${O(NK)}$\\
\hline
\end{tabular}
\vspace{-1\baselineskip}
\end{table}
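To make the scalability gap in Table~\ref{table:complexity} concrete, the short Python sketch below simply evaluates the listed search-space formulas for a growing number of co-running applications; the chosen values of $K$ (16 ways, two applications per cache) and the function names are our own illustrative assumptions, not taken from the cited works.
\begin{verbatim}
from math import comb

def ucp_search_space(n_apps, n_ways):
    # Utility-based partitioning: number of ways to distribute n_ways
    # cache ways among n_apps applications (stars and bars).
    return comb(n_apps + n_ways - 1, n_apps - 1)

def greedy_coschedule_space(n_apps, per_cache):
    # Greedy co-scheduling: choices of which applications share one cache.
    return comb(n_apps, per_cache)

def carma_space(n_apps, n_resources):
    # CARMA, worst case per application: linear in applications x resources.
    return n_apps * n_resources

for n in (4, 8, 16, 32, 64):
    print(n, ucp_search_space(n, 16),
          greedy_coschedule_space(n, 2), carma_space(n, 16))
\end{verbatim}
Even for a few tens of applications, the centralized search spaces grow by many orders of magnitude, while the per-application cost of the market-based scheme stays linear.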
\begin{figure}[!b]
\vspace{-1.5\baselineskip}
\centering
\includegraphics[height=1.5in, width=3.5in]{Images/Dynamic.pdf}
\vspace{-2\baselineskip}
\caption{\label{fig:Dynamic} Performance comparison of static and dynamic scheduling of two applications (\textit{hmmer} and \textit{mcf} from \textit{Spec 2006}) under two different \textit{LLC} sizes.}
\end{figure}
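As a concrete, simplified version of the example in Figure~\ref{fig:Dynamic}, the sketch below compares the aggregate IPC over the two phases when the larger cache is held statically by the first application versus re-assigned at the phase boundary. Only the percentage gains come from the text; the baseline IPC of 1.0 with the smaller cache is an assumption made purely for illustration.
\begin{verbatim}
# Per-phase IPC gain (percent) each application gets from the larger LLC.
GAIN = {"app1": (35.0, 20.6), "app2": (15.0, 36.84)}
BASE_IPC = 1.0   # assumed baseline IPC with the smaller private cache

def total_ipc(owner_per_phase):
    """Sum of both applications' IPCs over the two phases."""
    total = 0.0
    for phase in (0, 1):
        for app in GAIN:
            ipc = BASE_IPC
            if app == owner_per_phase[phase]:
                ipc *= 1.0 + GAIN[app][phase] / 100.0
            total += ipc
    return total

static = total_ipc(("app1", "app1"))    # app1 keeps the large LLC
dynamic = total_ipc(("app1", "app2"))   # re-assign at the phase boundary
print(static, dynamic)                  # dynamic > static
\end{verbatim}
Under these assumptions, the phase-aware assignment yields a higher aggregate IPC than either static choice, which is exactly the dynamism that the proposed scheme exploits.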
\indent In this paper, we present a game-theoretic resource assignment method that addresses all the above shortcomings, including scalability, dynamism, and fairness, while allowing applications to obtain their desired performance based on their own utility functions.\\
\indent \textbf{1. Semi-Decentralized}: The dual of a centralized problem is a decentralized one in which the optimization goal is broken into smaller, meaningful sub-problems. In the context of heterogeneous resource assignment this is straightforward: profiling, analysis, and evaluation of demands are performed on the application side, while the final decision of assigning resources to applications, based on their bids, is easily made by the OS as the applications compete with each other for the best assignment. As in a capitalist system, the complexity of governing is transferred to the independent entities, and the government only sets policies and makes the final decisions. To achieve this, we introduce a novel market-based approach. Roughly speaking, the worst-case complexity of our approach (for each application) is $O(NK)$, where $N$ is the number of applications and $K$ is the number of available resources. On average, however, the auction terminates in fewer than $N/2$ iterations.\\
\indent \textbf{2. Dynamic}: To confront the scalability problem of previous approaches, we use a market-based approach that moves decision making to the individual applications. Iterative auctions have long been used to solve non-trivial resource allocation problems at low complexity, e.g., in government sales of resources, \textit{eBay}, real estate sales, and stock markets. Similarly, the per-application complexity of decentralized computation is lower than that of centralized computation, which makes it possible to revisit the allocation decision every small time quantum, or whenever an application enters or leaves the system.\\
\indent \textbf{3. Fair}: The proposed method solves the heterogeneous resource assignment problem in a market setting. Each application's demand, regardless of any global optimization objective (IPC, power, etc.), translates into its true valuation of its own performance. The auctioneer (the OS) assigns resources to the applications with the highest bids, so the optimization objectives remain local. Hence, resources can be assigned to applications with different objectives, expressed as utility functions (a minimal sketch of such a bidding round is given just below).\\
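The exact bidding protocol is described in Section~\ref{Problem_definition}; as a minimal illustrative sketch of the idea, the code below runs one sealed-bid round per resource unit: every application independently bids its own marginal utility for one more unit, and the OS (auctioneer) awards the unit to the highest bidder, for an overall cost of $O(NK)$. The utility model, constants, and function names are our own assumptions, not CARMA's actual mechanism.
\begin{verbatim}
def marginal_utility(params, units_owned):
    # Placeholder concave utility with diminishing returns; a real
    # application would derive this from its own metric (IPC, energy, ...).
    peak, half = params
    return peak / (units_owned + half)

def auction(apps, total_units):
    owned = {name: 0 for name in apps}
    for _ in range(total_units):                      # K rounds
        bids = {name: marginal_utility(p, owned[name])
                for name, p in apps.items()}          # apps bid locally
        winner = max(bids, key=bids.get)              # OS picks the top bid
        owned[winner] += 1
    return owned

apps = {"A_ipc": (4.0, 2.0), "B_ipc": (2.0, 1.0), "C_energy": (3.0, 4.0)}
print(auction(apps, total_units=16))
\end{verbatim}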
\indent Overall, in the cache contention game, the proposed approach improves system performance by 33.6\% on average (when running 16 applications) compared to a shared \textit{LLC}, while coming within 11.1\% of the maximum performance achievable by the best dynamic scheme. In the case study of heterogeneous CPU assignment, it brings a 106.6\% improvement (when running 16 applications at the same time). Moreover, the performance improvement grows as the number of co-running applications in the system increases. \\
\indent \textbf{Other potentials}: We introduce an auction-based resource management approach for diverse applications in large-scale competition games. In short, a system owner who pays for a high-end CMP server can guarantee that each application/user gets the best out of the system in exchange for payment, while an application owner bids for (pays) the system resources that yield its best performance. The auctioneer is application-agnostic and does not inspect applications' profiles to optimize the system globally; instead, the applications compete for their own improvement. The two case studies of cache partitioning and CPU sharing are examples of resource sharing, and the proposed approach can be employed in other resource partitioning problems. \\
\indent The remainder of the paper is organized as follows. Section~\ref{Motivation} discusses the background and motivation behind this work. In Section~\ref{Problem_definition}, we discuss our auction-based game model. Section~\ref{Case_Studies} presents the case studies of the cache contention game and of main processor and co-processor contention, along with simulation results. Section~\ref{Related_works} reviews related work, and Section~\ref{Conclusion} concludes the paper with a summary.
\vspace{-0.5\baselineskip}
Related Work
\label{Related_works}
With rapid improvements in computer technology, more and more cores are embedded in a single chip, and it is increasingly common for applications to compete for shared resources. On the one hand, scheduling shared resources for a large number of applications is challenging because the operating system does not know the performance metric of each application. On the other hand, the operating system has a global view of the system state and can guide applications in choosing shared resources.\\
\indent There have been several works, for managing the shared cache in multi-core systems. Qureshi et al. <|cite_start|> (Reference: Utility-based cache partitioning: A low-overhead, high-performance, runtime mechanism to partition shared caches: This paper investigates the problem of partitioning a shared cache between multiple concurrently executing applications. The commonly used LRU policy implicitly partitions a shared cache on a demand basis, giving more cache resources to the application that has a high demand and fewer cache resources to the application that has a low demand. However, a higher demand for cache resources does not always correlate with a higher performance from additional cache resources. It is beneficial for performance to invest cache resources in the application that benefits more from the cache resources rather than in the application that has more demand for the cache resources. This paper proposes utility-based cache partitioning (UCP), a low-overhead, runtime mechanism that partitions a shared cache between multiple applications depending on the reduction in cache misses that each application is likely to obtain for a given amount of cache resources. The proposed mechanism monitors each application at runtime using a novel, cost-effective, hardware circuit that requires less than 2kB of storage. The information collected by the monitoring circuits is used by a partitioning algorithm to decide the amount of cache resources allocated to each application. Our evaluation, with 20 multiprogrammed workloads, shows that UCP improves performance of a dual-core system by up to 23% and on average 11% over LRU-based cache partitioning) <|cite_end|> showed that assigning more cache space to applications with more cache utility does not always lead to better performance since there exist applications with very low cache reuse which may have very high cache utilization. \\
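To illustrate this demand-versus-utility distinction, the toy sketch below greedily hands out cache ways by marginal miss reduction rather than by raw demand. It is only an illustration of the general utility-based idea, not the cited UCP lookahead algorithm, and the miss curves are invented numbers.
\begin{verbatim}
WAYS = 8
# misses[a][w] = misses of application a when given w ways (0..WAYS)
misses = {
    "stream":   [100, 99, 98, 97, 96, 95, 94, 93, 92],  # high demand, low reuse
    "friendly": [60, 45, 32, 22, 15, 10, 7, 5, 4],      # big benefit from ways
}

def greedy_by_utility(ways_total):
    alloc = {a: 0 for a in misses}
    for _ in range(ways_total):
        # marginal miss reduction if the application receives one more way
        gain = {a: misses[a][alloc[a]] - misses[a][alloc[a] + 1]
                for a in misses if alloc[a] < ways_total}
        best = max(gain, key=gain.get)
        alloc[best] += 1
    return alloc

print(greedy_by_utility(WAYS))   # most ways go to 'friendly', not 'stream'
\end{verbatim}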
\indent Several software and hardware approaches have been proposed to find the optimal partitioning of cache space for different applications <|cite_start|> (Reference: Addressing shared resource contention in multicore processors via scheduling: Contention for shared resources on multicore processors remains an unsolved problem in existing systems despite significant research efforts dedicated to this problem in the past. Previous solutions focused primarily on hardware techniques and software page coloring to mitigate this problem. Our goal is to investigate how and to what extent contention for shared resource can be mitigated via thread scheduling. Scheduling is an attractive tool, because it does not require extra hardware and is relatively easy to integrate into the system. Our study is the first to provide a comprehensive analysis of contention-mitigating techniques that use only scheduling. The most difficult part of the problem is to find a classification scheme for threads, which would determine how they affect each other when competing for shared resources. We provide a comprehensive analysis of such classification schemes using a newly proposed methodology that enables to evaluate these schemes separately from the scheduling algorithm itself and to compare them to the optimal. As a result of this analysis we discovered a classification scheme that addresses not only contention for cache space, but contention for other shared resources, such as the memory controller, memory bus and prefetching hardware. To show the applicability of our analysis we design a new scheduling algorithm, which we prototype at user level, and demonstrate that it performs within 2\% of the optimal. We also conclude that the highest impact of contention-aware scheduling techniques is not in improving performance of a workload as a whole but in improving quality of service or performance isolation for individual applications.) <|cite_end|>. However, most of these approaches use brute force search of all possible combinations to find the best cache partitioning in runtime or introduce a lot of overhead. There have been some approaches which use binary search to reduce searching all possible combinations <|cite_start|> (Reference: Fair cache sharing and partitioning in a chip multiprocessor architecture: This paper presents a detailed study of fairness in cache sharing between threads in a chip multiprocessor (CMP) architecture. Prior work in CMP architectures has only studied throughput optimization techniques for a shared cache. The issue of fairness in cache sharing, and its relation to throughput, has not been studied. Fairness is a critical issue because the operating system (OS) thread scheduler's effectiveness depends on the hardware to provide fair cache sharing to co-scheduled threads. Without such hardware, serious problems, such as thread starvation and priority inversion, can arise and render the OS scheduler ineffective. This paper makes several contributions. First, it proposes and evaluates five cache fairness metrics that measure the degree of fairness in cache sharing, and shows that two of them correlate very strongly with the execution-time fairness. Execution-time fairness is defined as how uniform the execution times of co-scheduled threads are changed, where each change is relative to the execution time of the same thread running alone. Secondly, using the metrics, the paper proposes static and dynamic L2 cache partitioning algorithms that optimize fairness. 
The dynamic partitioning algorithm is easy to implement, requires little or no profiling, has low overhead, and does not restrict the cache replacement algorithm to LRU. The static algorithm, although requiring the cache to maintain LRU stack information, can help the OS thread scheduler to avoid cache thrashing. Finally, this paper studies the relationship between fairness and throughput in detail. We found that optimizing fairness usually increases throughput, while maximizing throughput does not necessarily improve fairness. Using a set of co-scheduled pairs of benchmarks, on average our algorithms improve fairness by a factor of 4/spl times/, while increasing the throughput by 15%, compared to a nonpartitioned shared cache.) <|cite_end|> <|cite_start|> (Reference: Gaining Insights into Multicore Cache Partitioning: Bridging the Gap
between Simulation and Real Systems: Cache partitioning and sharing is critical to the effective utilization of multicore processors. However, almost all existing studies have been evaluated by simulation that often has several limitations, such as excessive simulation time, absence of OS activities and proneness to simulation inaccuracy. To address these issues, we have taken an efficient software approach to supporting both static and dynamic cache partitioning in OS through memory address mapping. We have comprehensively evaluated several representative cache partitioning schemes with different optimization objectives, including performance, fairness, and quality of service (QoS). Our software approach makes it possible to run the SPEC CPU2006 benchmark suite to completion. Besides confirming important conclusions from previous work, we are able to gain several insights from whole-program executions, which are infeasible from simulation. For example, giving up some cache space in one program to help another one may improve the performance of both programs for certain workloads due to reduced contention for memory bandwidth. Our evaluation of previously proposed fairness metrics is also significantly different from a simulation-based study. The contributions of this study are threefold. (1) To the best of our knowledge, this is a highly comprehensive execution- and measurement-based study on multicore cache partitioning. This paper not only confirms important conclusions from simulation-based studies, but also provides new insights into dynamic behaviors and interaction effects. (2) Our approach provides a unique and efficient option for evaluating multicore cache partitioning. The implemented software layer can be used as a tool in multicore performance evaluation and hardware design. (3) The proposed schemes can be further refined for OS kernels to improve performance.) <|cite_end|> <|cite_start|> (Reference: Rapidmrc: approximating l2 miss rate curves on commodity systems for online optimizations.: Miss rate curves (MRCs) are useful in a number of contexts. In our research, online L2 cache MRCs enable us to dynamically identify optimal cache sizes when cache-partitioning a shared-cache multicore processor. Obtaining L2 MRCs has generally been assumed to be expensive when done in software and consequently, their usage for online optimizations has been limited. To address these problems and opportunities, we have developed a low-overhead software technique to obtain L2 MRCs online on current processors, exploiting features available in their performance monitoring units so that no changes to the application source code or binaries are required. Our technique, called RapidMRC, requires a single probing period of roughly 221 million processor cycles (147 ms), and subsequently 124 million cycles (83 ms) to process the data. We demonstrate its accuracy by comparing the obtained MRCs to the actual L2 MRCs of 30 applications taken from SPECcpu2006, SPECcpu2000, and SPECjbb2000. We show that RapidMRC can be applied to sizing cache partitions, helping to achieve performance improvements of up to 27%.) <|cite_end|>. But none of these methods are scalable for the future many-core processor designs.\\
\indent There exists prior game-theoretic approaches designing a centralized scheduling framework that aims at a fair optimization of applications' utility <|cite_start|> (Reference: REF: resource elasticity fairness with sharing incentives for multiprocessors: With the democratization of cloud and datacenter computing, users increasingly share large hardware platforms. In this setting, architects encounter two challenges: sharing fairly and sharing multiple resources. Drawing on economic game-theory, we rethink fairness in computer architecture. A fair allocation must provide sharing incentives (SI), envy-freeness (EF), and Pareto efficiency (PE). We show that Cobb-Douglas utility functions are well suited to modeling user preferences for cache capacity and memory bandwidth. And we present an allocation mechanism that uses Cobb-Douglas preferences to determine each user's fair share of the hardware. This mechanism provably guarantees SI, EF, and PE, as well as strategy-proofness in the large (SPL). And it does so with modest performance penalties, less than 10\% throughput loss, relative to an unfair mechanism.) <|cite_end|> <|cite_start|> (Reference: Cooper: Task colocation with cooperative games: Task colocation improves datacenter utilization but introduces resource contention for shared hardware. In this setting, a particular challenge is balancing performance and fairness. We present Cooper, a game-theoretic framework for task colocation that provides fairness while preserving performance. Cooper predicts users' colocation preferences and finds stable matches between them. Its colocations satisfy preferences and encourage strategic users to participate inshared systems. Given Cooper's colocations, users' performance penalties are strongly correlated to their contributions to contention, which is fair according to cooperative game theory. Moreover, its colocations perform within 5% of prior heuristics.) <|cite_end|> <|cite_start|> (Reference: Dominant Resource Fairness: Fair Allocation of Multiple Resource Types: In this report I will first describe the work of Ghodsi et. al and their contributions. I will explain the notion of Dominant Resource Fairness in policies for the allocation of multi-type resources, and describe the analysis of its valuable properties. In the second part I will discuss my attempts in extending these results in 3 new directions: analysis of DRF in the discrete case (when only whole tasks can be executed), resource utilization of DRF solutions, and DRF with assignment constraints.) <|cite_end|> <|cite_start|> (Reference: Sharing incentives and fair division for multiprocessors: The trend in datacenter computing is toward large, shared hardware platforms, which poses two challenges to architects: sharing fairly and sharing multiple resources. Drawing on economic game theory, the authors rethink fairness in computer architecture and propose Resource Elasticity Fairness to find fair allocations that ensure sharing incentives, envy-freeness, Pareto efficiency, and strategy proofness in large systems.) <|cite_end|> <|cite_start|> (Reference: The computational sprinting game: Computational sprinting is a class of mechanisms that boost performance but dissipate additional power. We describe a sprinting architecture in which many, independent chip multiprocessors share a power supply and sprints are constrained by the chips' thermal limits and the rack's power limits. Moreover, we present the computational sprinting game, a multi-agent perspective on managing sprints. 
Strategic agents decide whether to sprint based on application phases and system conditions. The game produces an equilibrium that improves task throughput for data analytics workloads by 4-6× over prior greedy heuristics and performs within 90% of an upper bound on throughput from a globally optimized policy.) <|cite_end|>. Zahedi et al. in REF <|cite_start|> (Reference: REF: resource elasticity fairness with sharing incentives for multiprocessors: With the democratization of cloud and datacenter computing, users increasingly share large hardware platforms. In this setting, architects encounter two challenges: sharing fairly and sharing multiple resources. Drawing on economic game-theory, we rethink fairness in computer architecture. A fair allocation must provide sharing incentives (SI), envy-freeness (EF), and Pareto efficiency (PE). We show that Cobb-Douglas utility functions are well suited to modeling user preferences for cache capacity and memory bandwidth. And we present an allocation mechanism that uses Cobb-Douglas preferences to determine each user's fair share of the hardware. This mechanism provably guarantees SI, EF, and PE, as well as strategy-proofness in the large (SPL). And it does so with modest performance penalties, less than 10\% throughput loss, relative to an unfair mechanism.) <|cite_end|> <|cite_start|> (Reference: Sharing incentives and fair division for multiprocessors: The trend in datacenter computing is toward large, shared hardware platforms, which poses two challenges to architects: sharing fairly and sharing multiple resources. Drawing on economic game theory, the authors rethink fairness in computer architecture and propose Resource Elasticity Fairness to find fair allocations that ensure sharing incentives, envy-freeness, Pareto efficiency, and strategy proofness in large systems.) <|cite_end|> use the Cobb-Douglas production function as a fair allocator for cache and memory bandwidth. They show that the Cobb-Douglas function provides game-theoretic properties such as sharing incentives, envy-freedom, and Pareto efficiency. But their approach is still centralized and spatially divides the shared resources to enforce a fair near-optimal policy sacrificing the performance. In their approach, the centralized scheduler assumes all applications have the same priority for cache and memory bandwidth, while we do not have any assumption on this. Further, our auction-based resource allocation can be used for any number of resources and any priority for each application and the centralized scheduler does not need to have a global knowledge of these priorities. \\
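For reference, a Cobb-Douglas utility over cache capacity and memory bandwidth has the textbook form $u(x_1,x_2)=x_1^{\alpha_1} x_2^{\alpha_2}$ with $\alpha_1+\alpha_2=1$; the tiny sketch below only evaluates this standard form for two users with different elasticities (the shares and exponents are made up and are not taken from the cited papers).
\begin{verbatim}
def cobb_douglas(shares, alphas):
    """u = prod_i shares[i] ** alphas[i], with sum(alphas) == 1."""
    u = 1.0
    for x, a in zip(shares, alphas):
        u *= x ** a
    return u

# User A cares more about cache capacity, user B about memory bandwidth.
print(cobb_douglas((0.6, 0.4), (0.7, 0.3)))   # A's utility under this split
print(cobb_douglas((0.4, 0.6), (0.3, 0.7)))   # B's utility under this split
# With an equal split every user gets exactly 0.5, since
# 0.5**a1 * 0.5**a2 = 0.5 whenever a1 + a2 == 1.
print(cobb_douglas((0.5, 0.5), (0.7, 0.3)))
\end{verbatim}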
\indent Ghodsi et al. in DRF <|cite_start|> (Reference: Dominant Resource Fairness: Fair Allocation of Multiple Resource Types: In this report I will first describe the work of Ghodsi et. al and their contributions. I will explain the notion of Dominant Resource Fairness in policies for the allocation of multi-type resources, and describe the analysis of its valuable properties. In the second part I will discuss my attempts in extending these results in 3 new directions: analysis of DRF in the discrete case (when only whole tasks can be executed), resource utilization of DRF solutions, and DRF with assignment constraints.) <|cite_end|> use another centralized fair policy to maximize the dominant resource utilization. But in practice, it is not possible to clone any number of instances of each resource.
Cooper <|cite_start|> (Reference: Cooper: Task colocation with cooperative games: Task colocation improves datacenter utilization but introduces resource contention for shared hardware. In this setting, a particular challenge is balancing performance and fairness. We present Cooper, a game-theoretic framework for task colocation that provides fairness while preserving performance. Cooper predicts users' colocation preferences and finds stable matches between them. Its colocations satisfy preferences and encourage strategic users to participate inshared systems. Given Cooper's colocations, users' performance penalties are strongly correlated to their contributions to contention, which is fair according to cooperative game theory. Moreover, its colocations perform within 5% of prior heuristics.) <|cite_end|> enhances REF to capture colocated applications fairly, but it only addresses the special case of having two sets of applications with matched resources. Fan et al. <|cite_start|> (Reference: The computational sprinting game: Computational sprinting is a class of mechanisms that boost performance but dissipate additional power. We describe a sprinting architecture in which many, independent chip multiprocessors share a power supply and sprints are constrained by the chips' thermal limits and the rack's power limits. Moreover, we present the computational sprinting game, a multi-agent perspective on managing sprints. Strategic agents decide whether to sprint based on application phases and system conditions. The game produces an equilibrium that improves task throughput for data analytics workloads by 4-6× over prior greedy heuristics and performs within 90% of an upper bound on throughput from a globally optimized policy.) <|cite_end|> exploits computational sprinting architecture to improve task throughput assuming a class of applications where boosting their performance by increasing the power. \\
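The DRF policy mentioned above is easy to state concretely: a user's dominant share is the largest fraction it holds of any resource, and the allocator repeatedly grants one task's worth of resources to the user with the smallest dominant share. The sketch below follows this standard progressive-filling description; the capacities and demand vectors are illustrative.
\begin{verbatim}
def drf(capacity, demands, max_rounds=1000):
    used = {u: [0.0] * len(capacity) for u in demands}
    tasks = {u: 0 for u in demands}
    free = list(capacity)
    for _ in range(max_rounds):
        def dominant(u):
            return max(used[u][r] / capacity[r] for r in range(len(capacity)))
        fits = [u for u in demands
                if all(demands[u][r] <= free[r] for r in range(len(capacity)))]
        if not fits:
            break
        u = min(fits, key=dominant)        # lowest dominant share goes next
        for r in range(len(capacity)):
            used[u][r] += demands[u][r]
            free[r] -= demands[u][r]
        tasks[u] += 1
    return tasks

# 9 CPUs and 18 GB; A's tasks need <1 CPU, 4 GB>, B's need <3 CPUs, 1 GB>.
print(drf(capacity=[9, 18], demands={"A": [1, 4], "B": [3, 1]}))
\end{verbatim}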
\indent While all prior works use centralized scheduling that provides fairness and assumes the same utility function for every application, co-runners may have completely different needs, and it is not efficient to apply the same fairness/performance policy across them.
Our auction-based resource scheduling provides scalability, since individual applications compete for the shared resources based on their own utilities and the burden of decision making is removed from the central scheduler. We believe that future CMPs should move toward a more decentralized approach, which is more scalable and provides a fair allocation of resources based on the applications' needs. \\
\indent Auction theory, a subfield of economics, has recently been used as a tool to solve large-scale resource assignment in cloud computing <|cite_start|> (Reference: Auction Theory: Auctions have become an important mechanism to allocate scarce resources, in the physical world and in the virtual world. The analysis of auction often proceeds by assuming an ex-ante symmetry among the bidders. Yet, many markets are distinguished by fundamental asymmetries between the bidders. Some bidders may have on average higher valuations or better information, or may have a richer strategy sets. Until today very little is known how classic auction results perform in such asymmetric environments. This research project aims to develop the first such results. The research project is using advanced mathematics in optimization and differential equations, and thus will require an advanced background in mathematics, and might be ideal for majors in mathematics and joint majors in mathematics.) <|cite_end|> <|cite_start|> (Reference: Auctions and bidding: A guide for computer scientists: There is a veritable menagerie of auctions—single-dimensional, multi-dimensional, single-sided, double-sided, first-price, second-price, English, Dutch, Japanese, sealed-bid—and these have been extensively discussed and analyzed in the economics literature. The main purpose of this article is to survey this literature from a computer science perspective, primarily from the viewpoint of computer scientists who are interested in learning about auction theory, and to provide pointers into the economics literature for those who want a deeper technical understanding. In addition, since auctions are an increasingly important topic in computer science, we also look at work on auctions from the computer science literature. Overall, our aim is to identifying what both these bodies of work these tell us about creating electronic auctions.) <|cite_end|>. In an auction process, buyers submit bids to obtain commodities, and sellers want to sell their commodities at the highest possible price. Moreover, existing auction-based allocators assume multiple buyers and multiple sellers but only a single type of resource to bid on, so they cannot be used directly for our purpose, where there is a single seller with multiple bundled resources. That is why we choose a simpler, related scheme for a computer architecture, achieving higher performance with fewer transactions and auctions. \\
\indent Our auction-based algorithm is inspired by the work of Bertsekas, which uses an auction-based approach for network flow problems. Our algorithm extends the local assignment auction proposed by Bertsekas et al., which has been shown to converge to the global assignment within a linear approximation.
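For completeness, a bare-bones version of Bertsekas' forward auction for the symmetric assignment problem is sketched below: each unassigned bidder bids on its best object by the gap between its best and second-best net values plus a small $\epsilon$, and the object moves to that bidder; with integer benefits and $\epsilon < 1/n$ the final assignment is optimal. The benefit matrix is illustrative, and this sketch is the textbook algorithm rather than the extension developed in this paper.
\begin{verbatim}
def auction_assignment(benefit, eps=0.1):
    """Bertsekas' forward auction: benefit[i][j] = value of object j to bidder i."""
    n = len(benefit)
    price = [0.0] * n
    owner = [None] * n        # owner[j]    = bidder currently holding object j
    assigned = [None] * n     # assigned[i] = object currently held by bidder i
    while None in assigned:
        i = assigned.index(None)                       # any unassigned bidder
        values = [benefit[i][j] - price[j] for j in range(n)]
        best = max(range(n), key=lambda j: values[j])
        rest = [values[j] for j in range(n) if j != best]
        second = max(rest) if rest else values[best]
        bid = values[best] - second + eps
        if owner[best] is not None:                    # previous holder is evicted
            assigned[owner[best]] = None
        owner[best], assigned[i] = i, best
        price[best] += bid
    return assigned, price

benefit = [[10, 6, 2],
           [ 8, 7, 3],
           [ 5, 9, 4]]
print(auction_assignment(benefit))   # optimal: bidder 0->0, 1->2, 2->1
\end{verbatim}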
\vspace{-1\baselineskip} <|paper_end|> | [
"<|reference_start|> evaluating cloud workload characteristics: <|reference_end|>",
"<|reference_start|> Gaining Insights into Multicore Cache Partitioning: Bridging the Gap\nbetween Simulation and Real Systems: Cache partitioning and sharing is critical to the effective utilization of multicore processors. However, almost all existing studies have been evaluated by simulation that often has several limitations, such as excessive simulation time, absence of OS activities and proneness to simulation inaccuracy. To address these issues, we have taken an efficient software approach to supporting both static and dynamic cache partitioning in OS through memory address mapping. We have comprehensively evaluated several representative cache partitioning schemes with different optimization objectives, including performance, fairness, and quality of service (QoS). Our software approach makes it possible to run the SPEC CPU2006 benchmark suite to completion. Besides confirming important conclusions from previous work, we are able to gain several insights from whole-program executions, which are infeasible from simulation. For example, giving up some cache space in one program to help another one may improve the performance of both programs for certain workloads due to reduced contention for memory bandwidth. Our evaluation of previously proposed fairness metrics is also significantly different from a simulation-based study. The contributions of this study are threefold. (1) To the best of our knowledge, this is a highly comprehensive execution- and measurement-based study on multicore cache partitioning. This paper not only confirms important conclusions from simulation-based studies, but also provides new insights into dynamic behaviors and interaction effects. (2) Our approach provides a unique and efficient option for evaluating multicore cache partitioning. The implemented software layer can be used as a tool in multicore performance evaluation and hardware design. (3) The proposed schemes can be further refined for OS kernels to improve performance. <|reference_end|>",
"<|reference_start|> Sharing incentives and fair division for multiprocessors: The trend in datacenter computing is toward large, shared hardware platforms, which poses two challenges to architects: sharing fairly and sharing multiple resources. Drawing on economic game theory, the authors rethink fairness in computer architecture and propose Resource Elasticity Fairness to find fair allocations that ensure sharing incentives, envy-freeness, Pareto efficiency, and strategy proofness in large systems. <|reference_end|>",
"<|reference_start|> Auctions and bidding: A guide for computer scientists: There is a veritable menagerie of auctions—single-dimensional, multi-dimensional, single-sided, double-sided, first-price, second-price, English, Dutch, Japanese, sealed-bid—and these have been extensively discussed and analyzed in the economics literature. The main purpose of this article is to survey this literature from a computer science perspective, primarily from the viewpoint of computer scientists who are interested in learning about auction theory, and to provide pointers into the economics literature for those who want a deeper technical understanding. In addition, since auctions are an increasingly important topic in computer science, we also look at work on auctions from the computer science literature. Overall, our aim is to identifying what both these bodies of work these tell us about creating electronic auctions. <|reference_end|>"
] | [
10,
14,
40,
45
] | {"<|multi_cite_1_1|>": "ss-2020668", "<|multi_cite_1_2|>": "ss-1423099", "<|multi_cite_1_3|>": "ss-906345", "<|multi_cite_1_4|>": "ss-2331003", "<|multi_cite_1_5|>": "ss-2405578", "<|multi_cite_1_6|>": "ss-2405579", "<|multi_cite_1_7|>": "ss-2405580", "<|multi_cite_1_8|>": "arxiv-102110", "<|multi_cite_1_9|>": "ss-2405581", "<|multi_cite_1_10|>": "arxiv-102121", "<|multi_cite_1_11|>": "ss-2405582", "<|cite_2|>": "ss-1714011", "<|multi_cite_3_1|>": "ss-1423099", "<|multi_cite_3_2|>": "ss-1437976", "<|multi_cite_3_3|>": "ss-1821174", "<|multi_cite_3_4|>": "ss-1716430", "<|multi_cite_3_5|>": "ss-1714011", "<|multi_cite_3_6|>": "ss-2405583", "<|multi_cite_3_7|>": "ss-1981043", "<|cite_4|>": "ss-2405578", "<|cite_5|>": "ss-906345", "<|multi_cite_6_1|>": "ss-1437976", "<|multi_cite_6_2|>": "ss-1423099", "<|multi_cite_6_3|>": "ss-1981043", "<|cite_7|>": "ss-1437976", "<|cite_8|>": "ss-1437976", "<|cite_9|>": "ss-1981043", "<|cite_10|>": "ss-1981043", "<|cite_11|>": "ss-1981043", "<|cite_12|>": "ss-1437976", "<|cite_13|>": "ss-1423099", "<|multi_cite_14_1|>": "ss-2331003", "<|multi_cite_14_2|>": "ss-1821174", "<|multi_cite_14_3|>": "ss-2020667", "<|multi_cite_15_1|>": "ss-1453720", "<|multi_cite_15_2|>": "ss-2405584", "<|multi_cite_15_3|>": "ss-750533", "<|multi_cite_15_4|>": "ss-2405585", "<|multi_cite_15_5|>": "ss-1239658", "<|multi_cite_16_1|>": "ss-1453720", "<|multi_cite_16_2|>": "ss-2405585", "<|cite_17|>": "ss-750533", "<|cite_18|>": "ss-2405584", "<|cite_19|>": "ss-1239658", "<|multi_cite_20_1|>": "ss-1291526", "<|multi_cite_20_2|>": "ss-2318946"} |
2010.11703-1 | <|cite_start|> (Reference: Detecting the Expectancy of a Place Using Nearby Context for Appearance-Based Mapping: In recent years, place recognition techniques have been extensively studied in the domain of robotic mapping, referred to as appearance-based mapping. Nonetheless, the majority of these methods focus the challenges of place recognition in offline or supervised scenarios, which in certain conditions, e.g., unknown environments, is infeasible. In this paper, we address the challenges of online place recognition and demonstrate the general applicability of our approach in versatile environments. To this end, a modified growing self-organizing network of neurons is proposed, which incrementally adapts itself to learn the topology of the perceptual space formed by gist features. Given a query image and the network state at any time instant, the expected activity of the network is estimated using a proposed Bayesian framework, and the current place is categorized as familiar or novel. Exhaustive experiments on 11 challenging sequences signify the strength of our algorithm for a reliable and real-time place recognition on routes as large as 18 km. Compared to many state-of-the-art approaches, our method does not need offline training or environment-specific parameter tuning.) <|cite_end|>for learning the topological representation of global gist features <|cite_start|> (Reference: Modeling the Shape of the Scene: A Holistic Representation of the Spatial Envelope: ) <|cite_end|>.
\subsection{Approaches using Convolutional Neural Networks Features}
The impressive performance of CNNs on a wide variety of tasks has been the main reason they have become the principal solution in many visual place recognition systems.
Utilizing an end-to-end trainable and generalized VLAD layer <|cite_start|> (Reference: Aggregating local descriptors into a compact image
representation: We address the problem of image search on a very large scale, where three constraints have to be considered jointly: the accuracy of the search, its efficiency, and the memory usage of the representation. We first propose a simple yet efficient way of aggregating local image descriptors into a vector of limited dimension, which can be viewed as a simplification of the Fisher kernel representation. We then show how to jointly optimize the dimension reduction and the indexing algorithm, so that it best preserves the quality of vector comparison. The evaluation shows that our approach significantly outperforms the state of the art: the search accuracy is comparable to the bag-of-features approach for an image representation that fits in 20 bytes. Searching a 10 million image dataset takes about 50ms.) <|cite_end|>, NetVLAD was proposed for similar locations' identification <|cite_start|> (Reference: NetVLAD: CNN architecture for weakly supervised place recognition: We tackle the problem of large scale visual place recognition, where the task is to quickly and accurately recognize the location of a given query photograph. We present the following three principal contributions. First, we develop a convolutional neural network (CNN) architecture that is trainable in an end-to-end manner directly for the place recognition task. The main component of this architecture, NetVLAD, is a new generalized VLAD layer, inspired by the "Vector of Locally Aggregated Descriptors" image representation commonly used in image retrieval. The layer is readily pluggable into any CNN architecture and amenable to training via backpropagation. Second, we develop a training procedure, based on a new weakly supervised ranking loss, to learn parameters of the architecture in an end-to-end manner from images depicting the same places over time downloaded from Google Street View Time Machine. Finally, we show that the proposed architecture significantly outperforms non-learnt image representations and off-the-shelf CNN descriptors on two challenging place recognition benchmarks, and improves over current state-of-the-art compact image representations on standard image retrieval benchmarks.) <|cite_end|>.
A Spatial Pyramid-Enhanced VLAD (SPE-VLAD) layer was proposed by <|cite_start|> (Reference: Spatial Pyramid-Enhanced NetVLAD With Weighted Triplet Loss for Place Recognition: We propose an end-to-end place recognition model based on a novel deep neural network. First, we propose to exploit the spatial pyramid structure of the images to enhance the vector of locally aggregated descriptors (VLAD) such that the enhanced VLAD features can reflect the structural information of the images. To encode this feature extraction into the deep learning method, we build a spatial pyramid-enhanced VLAD (SPE-VLAD) layer. Next, we impose weight constraints on the terms of the traditional triplet loss (T-loss) function such that the weighted T-loss (WT-loss) function avoids the suboptimal convergence of the learning process. The loss function can work well under weakly supervised scenarios in that it determines the semantically positive and negative samples of each query through not only the GPS tags but also the Euclidean distance between the image representations. The SPE-VLAD layer and the WT-loss layer are integrated with the VGG-16 network or ResNet-18 network to form a novel end-to-end deep neural network that can be easily trained via the standard backpropagation method. We conduct experiments on three benchmark data sets, and the results demonstrate that the proposed model defeats the state-of-the-art deep learning approaches applied to place recognition.) <|cite_end|>to encode the feature extraction and improve the loss function.
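As background for the VLAD-based methods above, plain (non-trainable) VLAD aggregation accumulates, for every word of a visual vocabulary, the residuals between the local descriptors assigned to that word and its centre, then normalizes the concatenation. The NumPy sketch below shows only this classic formulation; NetVLAD and SPE-VLAD replace the hard assignment with differentiable, trainable layers, and the array shapes here are arbitrary.
\begin{verbatim}
import numpy as np

def vlad(descriptors, centres):
    """descriptors: (n, d) local features; centres: (k, d) visual words."""
    k, d = centres.shape
    # hard-assign every descriptor to its nearest centre
    dists = np.linalg.norm(descriptors[:, None, :] - centres[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    v = np.zeros((k, d))
    for c in range(k):
        members = descriptors[nearest == c]
        if len(members):
            v[c] = (members - centres[c]).sum(axis=0)   # residual accumulation
    v = v.flatten()
    v = np.sign(v) * np.sqrt(np.abs(v))                 # power normalization
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

rng = np.random.default_rng(0)
print(vlad(rng.normal(size=(200, 32)), rng.normal(size=(16, 32))).shape)  # (512,)
\end{verbatim}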
PCANet employed a cascaded deep network to extract unsupervised features
improving the loop closure detection pipeline <|cite_start|> (Reference: Loop closure detection for visual SLAM using PCANet features: Loop closure detection benefits simultaneous localization and mapping (SLAM) in building a consistent map of the environment by reducing the accumulate error. Handcrafted features have been successfully used in traditional approaches, whereas in this paper, we show that unsupervised features extracted by deep learning models, can improves the accuracy of loop closure detection. In particular, we employ a cascaded deep network, namely the PCANet, to extract features as image descriptors. We tested the performance of our proposed method on open datasets to compare with traditional approaches. We found that the PCANet features outperform state-of-the-art handcrafted competitors, and are computational efficient to be implemented in practical robotics.) <|cite_end|>. <|cite_start|> (Reference: Robust visual semi-semantic loop closure detection by a covisibility graph and CNN features: ) <|cite_end|>proposed a visual scene modeling technique that preserved the geometric and semantic structure and, at the same time, improved the appearance invariance.
A multi-scale pooling exertion allowed for condition- and viewpoint-invariant features to be generated <|cite_start|> (Reference: Deep Learning Features at Scale for Visual Place Recognition: The success of deep learning techniques in the computer vision domain has triggered a range of initial investigations into their utility for visual place recognition, all using generic features from networks that were trained for other types of recognition tasks. In this paper, we train, at large scale, two CNN architectures for the specific place recognition task and employ a multi-scale feature encoding method to generate condition- and viewpoint-invariant features. To enable this training to occur, we have developed a massive Specific PlacEs Dataset (SPED) with hundreds of examples of place appearance change at thousands of different places, as opposed to the semantic place type datasets currently available. This new dataset enables us to set up a training regime that interprets place recognition as a classification problem. We comprehensively evaluate our trained networks on several challenging benchmark place recognition datasets and demonstrate that they achieve an average 10% increase in performance over other place recognition algorithms and pre-trained CNNs. By analyzing the network responses and their differences from pre-trained networks, we provide insights into what a network learns when training for place recognition, and what these results signify for future research in this area.) <|cite_end|>.
Omnidirectional CNN was proposed to mitigate the challenge of extreme camera pose variations <|cite_start|> (Reference: Omnidirectional CNN for Visual Place Recognition and Navigation: $ $Visual place recognition is challenging, especially when only a few place exemplars are given. To mitigate the challenge, we consider place recognition method using omnidirectional cameras and propose a novel Omnidirectional Convolutional Neural Network (O-CNN) to handle severe camera pose variation. Given a visual input, the task of the O-CNN is not to retrieve the matched place exemplar, but to retrieve the closest place exemplar and estimate the relative distance between the input and the closest place. With the ability to estimate relative distance, a heuristic policy is proposed to navigate a robot to the retrieved closest place. Note that the network is designed to take advantage of the omnidirectional view by incorporating circular padding and rotation invariance. To train a powerful O-CNN, we build a virtual world for training on a large scale. We also propose a continuous lifted structured feature embedding loss to learn the concept of distance efficiently. Finally, our experimental results confirm that our method achieves state-of-the-art accuracy and speed with both the virtual world and real-world datasets.) <|cite_end|>.
In the work of <|cite_start|> (Reference: Learning Context Flexible Attention Model for Long-Term Visual Place Recognition: Identifying regions of interest in an image has long been of great importance in a wide range of tasks, including place recognition. In this letter, we propose a novel attention mechanism with flexible context, which can be incorporated into existing feedforward network architecture to learn image representations for long-term place recognition. In particular, in order to focus on regions that contribute positively to place recognition, we introduce a multiscale context-flexible network to estimate the importance of each spatial region in the feature map. Our model is trained end-to-end for place recognition and can detect regions of interest of arbitrary shape. Extensive experiments have been conducted to verify the effectiveness of our approach and the results demonstrate that our model can achieve consistently better performance than the state of the art on standard benchmark datasets. Finally, we visualize the learned attention maps to generate insights into what attention the network has learned.) <|cite_end|>, the authors proposed an attention mechanism capable of being incorporated into an existing feed-forward network architecture to learn image representations for long-term place recognition.
A useful similarity measurement for detecting revisited locations in changing environments was proposed by <|cite_start|> (Reference: Real-Time Visual Place Recognition Based on Analyzing Distribution of Multi-scale CNN Landmarks: ) <|cite_end|>.
The combination of a neural network inspired by the Drosophila olfactory neural circuit with an \mbox{1-\textit{d}} Continuous Attractor Neural Network resulted into a compact system exhibiting high performance <|cite_start|> (Reference: A Hybrid Compact Neural Architecture for Visual Place Recognition: State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models including deep learning or image retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model temporal properties underlying spatial navigation in the brain. In this letter, we propose a new compact and high-performing place recognition model that bridges this divide for the first time. Our approach comprises two key neural models of these categories: (1) FlyNet, a compact, sparse two-layer neural network inspired by brain architectures of fruit flies, Drosophila melanogaster, and (2) a one-dimensional continuous attractor neural network (CANN). The resulting FlyNet+CANN network incorporates the compact pattern recognition capabilities of our FlyNet model with the powerful temporal filtering capabilities of an equally compact CANN, replicating entirely in a hybrid neural implementation the functionality that yields high performance in algorithmic localization approaches like SeqSLAM. We evaluate our model, and compare it to three state-of-the-art methods, on two benchmark real-world datasets with small viewpoint variations and extreme environmental changes - achieving 87% AUC results under day to night transitions compared to 60% for Multi-Process Fusion, 46% for LoST-X and 1% for SeqSLAM, while being 6.5, 310, and 1.5 times faster, respectively.) <|cite_end|>.
Such works commonly used CNNs to extract the global descriptor of a scene, while few of them applied CNNs to extract local information for appearance-based loop closure detection. <|paper_end|> | [
"<|reference_start|> Spatial Pyramid-Enhanced NetVLAD With Weighted Triplet Loss for Place Recognition: We propose an end-to-end place recognition model based on a novel deep neural network. First, we propose to exploit the spatial pyramid structure of the images to enhance the vector of locally aggregated descriptors (VLAD) such that the enhanced VLAD features can reflect the structural information of the images. To encode this feature extraction into the deep learning method, we build a spatial pyramid-enhanced VLAD (SPE-VLAD) layer. Next, we impose weight constraints on the terms of the traditional triplet loss (T-loss) function such that the weighted T-loss (WT-loss) function avoids the suboptimal convergence of the learning process. The loss function can work well under weakly supervised scenarios in that it determines the semantically positive and negative samples of each query through not only the GPS tags but also the Euclidean distance between the image representations. The SPE-VLAD layer and the WT-loss layer are integrated with the VGG-16 network or ResNet-18 network to form a novel end-to-end deep neural network that can be easily trained via the standard backpropagation method. We conduct experiments on three benchmark data sets, and the results demonstrate that the proposed model defeats the state-of-the-art deep learning approaches applied to place recognition. <|reference_end|>",
"<|reference_start|> Robust visual semi-semantic loop closure detection by a covisibility graph and CNN features: <|reference_end|>",
"<|reference_start|> Real-Time Visual Place Recognition Based on Analyzing Distribution of Multi-scale CNN Landmarks: <|reference_end|>",
"<|reference_start|> A Hybrid Compact Neural Architecture for Visual Place Recognition: State-of-the-art algorithms for visual place recognition, and related visual navigation systems, can be broadly split into two categories: computer-science-oriented models including deep learning or image retrieval-based techniques with minimal biological plausibility, and neuroscience-oriented dynamical networks that model temporal properties underlying spatial navigation in the brain. In this letter, we propose a new compact and high-performing place recognition model that bridges this divide for the first time. Our approach comprises two key neural models of these categories: (1) FlyNet, a compact, sparse two-layer neural network inspired by brain architectures of fruit flies, Drosophila melanogaster, and (2) a one-dimensional continuous attractor neural network (CANN). The resulting FlyNet+CANN network incorporates the compact pattern recognition capabilities of our FlyNet model with the powerful temporal filtering capabilities of an equally compact CANN, replicating entirely in a hybrid neural implementation the functionality that yields high performance in algorithmic localization approaches like SeqSLAM. We evaluate our model, and compare it to three state-of-the-art methods, on two benchmark real-world datasets with small viewpoint variations and extreme environmental changes - achieving 87% AUC results under day to night transitions compared to 60% for Multi-Process Fusion, 46% for LoST-X and 1% for SeqSLAM, while being 6.5, 310, and 1.5 times faster, respectively. <|reference_end|>"
] | [
4,
6,
10,
11
] | {"<|multi_cite_13_1|>": "ss-1270922", "<|multi_cite_13_2|>": "ss-1256616", "<|cite_14|>": "ss-997999", "<|cite_15|>": "ss-1279151", "<|multi_cite_16_1|>": "ss-1162551", "<|multi_cite_16_2|>": "ss-1326238", "<|multi_cite_16_3|>": "ss-1378920", "<|multi_cite_16_4|>": "ss-1962540", "<|multi_cite_17_1|>": "ss-2334805", "<|multi_cite_17_3|>": "ss-865136", "<|multi_cite_17_4|>": "ss-1962541", "<|multi_cite_17_5|>": "ss-1962542", "<|cite_18|>": "arxiv-183224", "<|cite_19|>": "arxiv-153301", "<|multi_cite_20_1|>": "ss-976815", "<|multi_cite_20_2|>": "ss-1291398", "<|multi_cite_20_3|>": "ss-1962543", "<|multi_cite_20_4|>": "ss-1133263", "<|multi_cite_21_1|>": "ss-1063541", "<|multi_cite_21_2|>": "ss-828463", "<|multi_cite_21_3|>": "ss-1417429", "<|multi_cite_21_4|>": "ss-1962544", "<|multi_cite_21_5|>": "ss-691356", "<|cite_22|>": "ss-1962545", "<|cite_23|>": "ss-1213862", "<|cite_24|>": "ss-820446", "<|multi_cite_25_1|>": "ss-1286916", "<|multi_cite_25_3|>": "ss-1962535", "<|multi_cite_25_4|>": "ss-1962536", "<|multi_cite_25_5|>": "ss-1962546", "<|cite_26|>": "ss-2373732", "<|cite_27|>": "ss-1962547", "<|cite_28|>": "arxiv-50822", "<|multi_cite_29_1|>": "ss-865137", "<|multi_cite_29_2|>": "ss-1260021", "<|multi_cite_29_3|>": "arxiv-641290", "<|multi_cite_30_1|>": "arxiv-59104", "<|multi_cite_30_2|>": "arxiv-71656", "<|multi_cite_30_3|>": "arxiv-76386", "<|multi_cite_30_4|>": "arxiv-95324", "<|multi_cite_30_5|>": "arxiv-236072", "<|cite_31|>": "ss-690198", "<|cite_32|>": "ss-1022003", "<|cite_33|>": "arxiv-112933", "<|cite_34|>": "ss-779731", "<|cite_35|>": "arxiv-236072", "<|cite_36|>": "arxiv-145365", "<|cite_37|>": "arxiv-94940", "<|cite_38|>": "ss-828463", "<|cite_39|>": "ss-1712652", "<|cite_40|>": "arxiv-236072", "<|multi_cite_41_1|>": "ss-1162551", "<|multi_cite_41_2|>": "ss-1326238", "<|cite_42|>": "ss-1099868", "<|cite_43|>": "ss-865136", "<|cite_1|>": "ss-1286916", "<|cite_44|>": "ss-691356", "<|cite_46|>": "ss-1427815", "<|multi_cite_2_1|>": "ss-1962535", "<|multi_cite_2_2|>": "ss-1962536", "<|cite_47|>": "ss-1962542", "<|cite_3|>": "ss-1260021", "<|cite_48|>": "ss-2373732", "<|cite_49|>": "arxiv-641290", "<|cite_5|>": "ss-1962537", "<|cite_50|>": "ss-1320524", "<|cite_6|>": "ss-1962538", "<|cite_51|>": "ss-1962540", "<|cite_52|>": "ss-1962548", "<|cite_53|>": "ss-1962549", "<|cite_8|>": "ss-904568", "<|cite_54|>": "ss-976815", "<|cite_55|>": "ss-1091343", "<|cite_56|>": "arxiv-87852", "<|cite_9|>": "ss-1513459", "<|cite_58|>": "ss-1962550", "<|cite_10|>": "ss-1141151", "<|cite_59|>": "arxiv-114630", "<|cite_60|>": "arxiv-151259", "<|cite_11|>": "ss-1260030", "<|cite_12|>": "ss-1962539", "<|cite_61|>": "arxiv-228985"} |
2201.06853-0 | <|paper_start|> Title: VAR-DRAM: Variation-Aware Framework for Efficient Dynamic Random Access Memory Design
Abstract: VAR-DRAM: Variation-Aware Framework for Efficient Dynamic Random Access Memory Design: Dynamic Random Access Memory (DRAM) is the de-facto choice for main memory devices due to its cost-effectiveness. It offers a larger capacity and higher bandwidth compared to SRAM but is slower than the latter. With each passing generation, DRAMs are becoming denser. One side-effect of this is the deviation of the nominal process, voltage, and temperature parameters. DRAM is often considered the bottleneck of the system as it trades off performance for capacity. With such inherent limitations, further deviation from nominal specifications is undesirable. In this paper, we investigate the impact of variations in conventional DRAM devices on performance, reliability, and energy requirements. Based on this study, we design a variation-aware framework, called VAR-DRAM, targeted at modern-day DRAM devices. It provides enhanced power management by taking variations into account. VAR-DRAM ensures faster execution of programs, as it internally remaps data from variation-affected cells to normal cells, and also ensures data preservation. Through extensive experimentation, we find that VAR-DRAM achieves peak energy savings of up to 48.8% with an average of 29.54% on DDR4 memories, while improving DRAM access latency by 7.4% compared to a variation-affected device.
Introduction
\label{sec:intro}
In current computing systems, Dynamic Random Access Memory (DRAM) based main memories constitute a significant portion of the total power consumption of the system. For example, studies such as <|cite_start|> (Reference: The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines, Second Edition: Abstract As computation continues to move into the cloud, the computing platform of interest no longer resembles a pizza box or a refrigerator, but a warehouse full of computers. These new large datacenters are quite different from traditional hosting facilities of earlier times and cannot be viewed simply as a collection of co-located servers. Large portions of the hardware and software resources in these facilities must work in concert to efficiently deliver good levels of Internet service performance, something that can only be achieved by a holistic approach to their design and deployment. In other words, we must treat the datacenter itself as one massive warehouse-scale computer (WSC). We describe the architecture of WSCs, the main factors influencing their design, operation, and cost structure, and the characteristics of their software base. We hope it will be useful to architects and programmers of today’s WSCs, as well as those of future many-core platforms which may one day implement the equivale...) <|cite_end|>show that DRAM devices consume up to 40\% power in server-class systems and up to 50\% power in graphics cards. Towards lowering the power consumed by DRAMs, vendors have implemented different power-aware DRAM designs such as DDRx memory <|cite_start|> (Reference: Error Rate Estimation of DDR4-SDRAM Buffers in Space Mass Memories: DDR4-SDRAMs are key components widely used in modern computing systems on ground and in future space applications, where the sensitivity to ionizing radiation effects and the corresponding data integrity performance is of special interest. In this paper, an example DDR4-SDRAM buffer memory partition inside a high-performance mass memory system is described. For this buffer, two different EDAC implementations, which are Reed-Solomon single-symbol-error-correction and Reed-Solomon double-symbol-error-correction are compared in terms of data integrity performance. This comparison is based on the word error probabilities taking into account DDR4-SDRAM component specific single event effects and possible mitigation such as scrubbing and power cycling. The approach described in this work quantifies the design decision for a certain EDAC architecture as well as highlighting the impact of design parameters such as scrubbing and power cycling periods.) <|cite_end|>and LPDDRx (low power DDR). In these aforementioned technologies, the reduction in power consumption is achieved due to the advancement of CMOS process technologies. Subsequent reductions are possible through scaling the supply voltage to the core DRAM array as well as the peripheral circuitry. In other directions, the power consumed by DRAM can be reduced by putting the different DRAM banks into different power states. Process variations in such scaled devices have been modeled in works <|cite_start|> (Reference: VARIUS: A Model of Process Variation and Resulting Timing Errors for Microarchitects: Within-die parameter variation poses a major challenge to high-performance microprocessor design, negatively impacting a processor's frequency and leakage power.
Addressing this problem, this paper proposes a microarchitecture-aware model for process variation-including both random and systematic effects. The model is specified using a small number of highly intuitive parameters. Using the variation model, this paper also proposes a framework to model timing errors caused by parameter variation. The model yields the failure rate of microarchitectural blocks as a function of clock frequency and the amount of variation. With the combination of the variation model and the error model, we have VARIUS, a comprehensive model that is capable of producing detailed statistics of timing errors as a function of different process parameters and operating conditions. We propose possible applications of VARIUS to microarchitectural research.) <|cite_end|> <|cite_start|> (Reference: {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D...) <|cite_end|>to identify the decaying portions of the chip.
The classic monolithic design of a typical DRAM has been enhanced to accommodate better performance with minimal power requirements. A modern-day DRAM device is organized into ranks and banks. Although this memory layer may not be active at all times, the capacitor-based chip has to be refreshed in order to maintain the integrity of the data it contains. It has been well established that ranks can be transitioned into \textit{low power mode} <|cite_start|> (Reference: {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight. While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations. Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve.) <|cite_end|>. DRAM banks, on the other hand, have rarely been exploited for such power-saving techniques. Only a few works have explored this direction <|cite_start|> (Reference: {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight. While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations.
Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve.) <|cite_end|> <|cite_start|> (Reference: Proceedings of the Seventh International Symposium on High-Performance Computer Architecture (HPCA'01), Nuevo Leone, Mexico, January 20-24, 2001: ) <|cite_end|>, which remains to be exploited on a large scale. One of the prominent reasons for this is due to the fact that DRAM power is largely calculated on a per-rank basis <|cite_start|> (Reference: {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight. While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations. Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. 
One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve.) <|cite_end|> <|cite_start|> (Reference: Error Rate Estimation of DDR4-SDRAM Buffers in Space Mass Memories: DDR4-SDRAMs are key components widely used in modern computing systems on ground and in future space applications, where the sensitivity to ionizing radiation effects and the corresponding data integrity performance is of special interest. In this paper, an example DDR4-SDRAM buffer memory partition inside a high-performance mass memory system is described. For this buffer, two different EDAC implementations, which are Reed-Solomon single-symbol-error-correction and Reed-Solomon double-symbol-error-correction are compared in terms of data integrity performance. This comparison is based on the word error probabilities taking into account DDR4-SDRAM component specific single event effects and possible mitigation such as scrubbing and power cycling. The approach described in this work quantifies the design decision for a certain EDAC architecture as well as highlighting the impact of design parameters such as scrubbing and power cycling periods.) <|cite_end|>. DRAM power consumption mechanism, later discussed in Section~\ref{subsec:dram-pow}, has always been centered mostly around a rank-wise approach. This opens up new opportunities for detailed investigations into the power efficiency of DRAM banks. The biggest beneficiary of such a power management technique would be LPDDR memories, installed on battery-operated devices like smartphones etc. However, modifications to the conventional architecture are necessary in order to exploit power savings from individual DRAM banks.
Another problem faced towards recent advancements in technology is the scaling of chips. This has caused a deviation in the process parameters of the digital chip from its nominal specified values. This effect has greatly reduced the ability of uniform performance by a device. Sections of the device under-performs, thus reducing the total throughput of the system. The trend of enhancement is towards reduced chip sizes. With chip size dipping below 32 nm technology node, it has been reported that process, voltage, and temperature (PVT) show a variation from its nominal specifications <|cite_start|> (Reference: VARIUS: A Model of Process Variation and Resulting Timing Errors for Microarchitects: Within-die parameter variation poses a major challenge to high-performance microprocessor design, negatively impacting a processor's frequency and leakage power. Addressing this problem, this paper proposes a microarchitecture-aware model for process variation-including both random and systematic effects. The model is specified using a small number of highly intuitive parameters. Using the variation model, this paper also proposes a framework to model timing errors caused by parameter variation. The model yields the failure rate of microarchitectural blocks as a function of clock frequency and the amount of variation. With the combination of the variation model and the error model, we have VARIUS, a comprehensive model that is capable of producing detailed statistics of timing errors as a function of different process parameters and operating conditions. We propose possible applications of VARIUS to microarchitectural research.) <|cite_end|> <|cite_start|> (Reference: {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D...) <|cite_end|>. On the application's end, the requirements of both compute-intensive, as well as data-intensive programs, are increasing. With such vivid requirements, one cannot ignore effects like parameter variations. Such effects affect the desired performance. Affected devices are slow, and consume a higher amount of power than their estimated counterparts. This also jeopardizes the reliability of the data present in these chips as the reliability of the chips depends on the process conditions. Variation is induced by several fundamental effects and is a combination of systematic and random effects. While systematic effects include lithographic lens aberrations, random effects include dopant density fluctuation <|cite_start|> (Reference: VARIUS: A Model of Process Variation and Resulting Timing Errors for Microarchitects: Within-die parameter variation poses a major challenge to high-performance microprocessor design, negatively impacting a processor's frequency and leakage power. Addressing this problem, this paper proposes a microarchitecture-aware model for process variation-including both random and systematic effects. The model is specified using a small number of highly intuitive parameters. Using the variation model, this paper also proposes a framework to model timing errors caused by parameter variation. The model yields the failure rate of microarchitectural blocks as a function of clock frequency and the amount of variation. 
With the combination of the variation model and the error model, we have VARIUS, a comprehensive model that is capable of producing detailed statistics of timing errors as a function of different process parameters and operating conditions. We propose possible applications of VARIUS to microarchitectural research.) <|cite_end|>.
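For illustration, the following minimal sketch shows one way a per-bank variation map can be constructed in the spirit of the models cited above, by combining a spatially smooth systematic component with an uncorrelated random component and flagging banks whose deviation crosses a threshold. The $4\times 4$ bank grid, the sigma values, and the 8\% threshold are assumptions made purely for this example and are not parameters taken from this work.
\begin{verbatim}
// Sketch: build a per-bank variation map as the sum of a spatially smooth systematic
// component and an uncorrelated random component, then flag the outlier banks.
// Grid size, sigma values, and the threshold are illustrative assumptions only.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int banks_x = 4, banks_y = 4;               // assumed 16-bank device on a 4x4 grid
    const double sigma_sys = 0.06, sigma_rnd = 0.03;  // assumed relative parameter deviations
    const double threshold = 0.08;                    // banks deviating by more than 8% are flagged
    const double pi = 3.14159265358979323846;

    std::mt19937 gen(42);
    std::normal_distribution<double> random_part(0.0, sigma_rnd);

    std::vector<std::vector<double>> dev(banks_y, std::vector<double>(banks_x));
    for (int y = 0; y < banks_y; ++y)
        for (int x = 0; x < banks_x; ++x) {
            // Smooth low-frequency trend across the die: a crude stand-in for the
            // spatially correlated (systematic) component of process variation.
            double systematic = sigma_sys * std::sin(pi * x / (banks_x - 1))
                                          * std::cos(pi * y / (banks_y - 1));
            dev[y][x] = systematic + random_part(gen);
        }

    // Banks beyond the threshold are the "variation affected" candidates that a
    // variation-aware controller would remap away from and power down first.
    for (int y = 0; y < banks_y; ++y)
        for (int x = 0; x < banks_x; ++x)
            if (std::fabs(dev[y][x]) > threshold)
                std::printf("bank (%d,%d): deviation %+.3f -> variation affected\n",
                            x, y, dev[y][x]);
    return 0;
}
\end{verbatim}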
These aforementioned variations give significant opportunities for optimization, which motivates our work. Previous works have shown that variation-prone DRAM devices are affected in terms of access latency <|cite_start|> (Reference: {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D...) <|cite_end|>, retention time <|cite_start|> (Reference: RAIDR: Retention-Aware Intelligent DRAM Refresh: Dynamic random-access memory (DRAM) is the building block of modern main memory systems. DRAM cells must be periodically refreshed to prevent loss of data. These refresh operations waste energy and degrade system performance by interfering with memory accesses. The negative effects of DRAM refresh increase as DRAM device capacity increases. Existing DRAM devices refresh all cells at a rate determined by the leakiest cell in the device. However, most DRAM cells can retain data for significantly longer. Therefore, many of these refreshes are unnecessary. In this paper, we propose RAIDR (Retention-Aware Intelligent DRAM Refresh), a low-cost mechanism that can identify and skip unnecessary refreshes using knowledge of cell retention times. Our key idea is to group DRAM rows into retention time bins and apply a different refresh rate to each bin. As a result, rows containing leaky cells are refreshed as frequently as normal, while most rows are refreshed less frequently. RAIDR uses Bloom filters to efficiently implement retention time bins. RAIDR requires no modification to DRAM and minimal modification to the memory controller. In an 8-core system with 32 GB DRAM, RAIDR achieves a 74.6% refresh reduction, an average DRAM power reduction of 16.1%, and an average system performance improvement of 8.6% over existing systems, at a modest storage overhead of 1.25 KB in the memory controller. RAIDR's benefits are robust to variation in DRAM system configuration, and increase as memory capacity increases.) <|cite_end|>, and reliability of the data. We propose a technique to exploit power savings from these variation-affected DRAM cells. We observe that powering down these affected cells results in a significant amount of energy savings; these cells, therefore, become ideal candidates for powering down. In a DRAM device, the last level of parallelism is offered at the DRAM bank level. Therefore, we choose to switch power at the level of DRAM banks. Another challenge with switching banks off and on at regular intervals is that the data contained in these banks is prone to being lost. Under such circumstances, preserving the data, or the \textit{state} of the DRAM, has both advantages and disadvantages. The advantage is that the data need not be re-fetched or re-computed once the banks are re-opened for use, and subsequent operations can continue seamlessly. The major disadvantage of preserving the state is that it incurs performance overheads, which may degrade the performance of the end application. Several works have proposed techniques where the state is either preserved or discarded. In our work, we propose a technique that provides state preservation via a very low overhead data structure. The data structure incurs low overhead in terms of both performance and additional area.
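As a rough sketch of the bank-level mechanism outlined above, the code below pairs each variation-affected bank with a healthy bank, records the remapping so the data is preserved, and marks the affected bank as powered down. The eight-bank device, the per-bank health flags, and the first-fit pairing policy are illustrative assumptions; the actual controller policy and data migration mechanism of VAR-DRAM are described later in the paper.
\begin{verbatim}
// Sketch: power down variation-affected DRAM banks and remap their contents to
// healthy banks. The bank count, health flags, and first-fit pairing policy are
// illustrative assumptions only; they do not reproduce VAR-DRAM's actual policy.
#include <cstdio>
#include <optional>
#include <utility>
#include <vector>

enum class PowerState { Active, PoweredDown };

struct Bank {
    bool variation_affected = false;   // assumed to come from a pre-built variation map
    bool hosts_remapped_data = false;  // healthy bank reserved to hold migrated data
    PowerState state = PowerState::Active;
};

int main() {
    std::vector<Bank> banks(8);
    banks[2].variation_affected = true;   // assumed variation map: banks 2 and 5 affected
    banks[5].variation_affected = true;

    // Pair each affected bank with the first healthy, unreserved bank (first fit),
    // record the remapping so the data is preserved, then power-gate the affected bank.
    std::vector<std::pair<int, int>> remap;   // (affected bank, healthy destination bank)
    for (int src = 0; src < static_cast<int>(banks.size()); ++src) {
        if (!banks[src].variation_affected) continue;
        std::optional<int> dst;
        for (int d = 0; d < static_cast<int>(banks.size()); ++d)
            if (!banks[d].variation_affected && !banks[d].hosts_remapped_data) { dst = d; break; }
        if (!dst) break;                      // no healthy bank left: leave remaining banks active
        remap.emplace_back(src, *dst);
        banks[*dst].hosts_remapped_data = true;
        banks[src].state = PowerState::PoweredDown;
    }

    for (const auto& m : remap)
        std::printf("bank %d powered down, contents remapped to bank %d\n", m.first, m.second);
    return 0;
}
\end{verbatim}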
\subsection{Motivation and Contribution}
Our primary aim is to mitigate the variation-related challenges faced in normal DRAM memory. Once variation-affected areas are identified, we use an enhanced power management mechanism to power down these under-performing banks while remapping the data present in these cells to normally functioning banks, thus maintaining access latency and ensuring higher reliability. The conventional memory controller is modified to account for variation data. The controller transitions these banks into a new \textit{ultra-low-power mode} using a power-gated circuit for DRAM banks, which we refer to simply as \textit{powering down} of banks. To preserve the data of these powered-down banks, we remap both the addresses and the data they hold. The remapping logic is flexible, which allows us to use it for other applications, as discussed later in the paper. VAR-DRAM saves up to 48.8\% of DRAM energy, averaging 29.54\%. VAR-DRAM is a \textit{framework} on which other energy-saving mechanisms can potentially be stacked to achieve higher energy savings while maintaining a nominal access latency. We have validated our proposed technique extensively. Collectively, our work provides the following concrete contributions:
\subsubsection{An efficient power management mechanism}
Prior work on bank-level power management includes bank-wise refresh and other software-based power management techniques <|cite_start|> (Reference: Proceedings of the Seventh International Symposium on High-Performance Computer Architecture (HPCA'01), Nuevo Leone, Mexico, January 20-24, 2001: ) <|cite_end|>. However, powering down DRAM banks using power gating is relatively new for a DRAM design and remains sparsely explored. In addition, we investigate methods for saving power at the bank level from variation-affected, under-performing components. This becomes particularly essential in the context of battery-operated mobile devices, where lowering power consumption would enable longer battery life.
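To give an intuition for the savings such a mechanism targets, the following back-of-the-envelope sketch compares the background energy of keeping one idle bank active against power-gating it for an idle interval and paying a wake-up penalty. All voltage, current, and timing values are placeholder assumptions and do not correspond to any specific DDR4 or LPDDR datasheet.
\begin{verbatim}
// Sketch: energy saved by power-gating one idle bank for an interval, minus wake-up cost.
// All electrical and timing values below are placeholder assumptions, not datasheet figures.
#include <cstdio>

int main() {
    const double v_dd         = 1.2;      // supply voltage (V), assumed
    const double i_background = 0.004;    // per-bank share of background current (A), assumed
    const double i_gated      = 0.0002;   // residual leakage when power-gated (A), assumed
    const double idle_time    = 0.010;    // length of the idle interval (s), assumed
    const double e_wakeup     = 2.0e-6;   // energy to restore the bank on wake-up (J), assumed

    double e_active = v_dd * i_background * idle_time;        // stay active the whole interval
    double e_gated  = v_dd * i_gated * idle_time + e_wakeup;  // gate the bank, then pay wake-up cost
    double saving   = e_active - e_gated;

    std::printf("active: %.2f uJ, gated: %.2f uJ, saving: %.2f uJ (%.1f%%)\n",
                e_active * 1e6, e_gated * 1e6, saving * 1e6,
                100.0 * saving / e_active);
    return 0;
}
\end{verbatim}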
\subsubsection{Memory devices with low latency}
Variations affect the access latency of memory devices. VAR-DRAM presents a variation-aware address remapping scheme, which favors normally working DRAM cells over variation-affected ones, thus reducing the access latency of the memory device.
\subsubsection{A lightweight address remapping logic}
We use a search- and space-efficient data structure called a \textit{trie} for remapping, which provides better space complexity than flat tables while offering logarithmic lookup times. Maintaining it eliminates re-computation costs, as the state of the DRAM is preserved throughout, but it incurs an additional storage overhead. Our findings (Section~\ref{sec:result}) clearly show that this overhead is marginal. Moreover, the remapping logic can also be used for remapping weak rows of a DRAM device, minimizing the blocking time of the device during DRAM refreshes.
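Conceptually, the remapping structure can be viewed as a binary trie keyed on the bits of the original address: a lookup walks one bit per level, so its cost is logarithmic in the size of the address space, and addresses that were never inserted fall through unchanged. The sketch below illustrates this idea; the 16-bit address width and the contents stored at the leaves are assumptions made for the example, not the exact layout used in VAR-DRAM.
\begin{verbatim}
// Sketch: a binary trie that remaps selected DRAM addresses (e.g. rows in powered-down
// or weak banks) to replacement addresses. Address width and leaf contents are assumed.
#include <cstdint>
#include <cstdio>
#include <memory>
#include <optional>

struct TrieNode {
    std::unique_ptr<TrieNode> child[2];
    std::optional<uint32_t> remapped;      // set only at the leaf of an inserted address
};

class RemapTrie {
public:
    explicit RemapTrie(int bits) : bits_(bits), root_(std::make_unique<TrieNode>()) {}

    // Insert a mapping original -> replacement, one bit per trie level (MSB first).
    void insert(uint32_t original, uint32_t replacement) {
        TrieNode* n = root_.get();
        for (int b = bits_ - 1; b >= 0; --b) {
            int bit = (original >> b) & 1;
            if (!n->child[bit]) n->child[bit] = std::make_unique<TrieNode>();
            n = n->child[bit].get();
        }
        n->remapped = replacement;
    }

    // Lookup walks at most bits_ levels: logarithmic in the size of the address space.
    uint32_t translate(uint32_t addr) const {
        const TrieNode* n = root_.get();
        for (int b = bits_ - 1; b >= 0 && n; --b)
            n = n->child[(addr >> b) & 1].get();
        return (n && n->remapped) ? *n->remapped : addr;   // unmapped addresses pass through
    }

private:
    int bits_;
    std::unique_ptr<TrieNode> root_;
};

int main() {
    RemapTrie trie(16);                 // assumed 16-bit bank/row index space
    trie.insert(0x2A3F, 0x1111);        // a row in a variation-affected bank -> healthy row
    std::printf("0x2A3F -> 0x%04X\n", (unsigned)trie.translate(0x2A3F));   // remapped
    std::printf("0x0005 -> 0x%04X\n", (unsigned)trie.translate(0x0005));   // unchanged
    return 0;
}
\end{verbatim}
Compared with a flat table indexed by every possible row, the trie only allocates nodes along inserted paths, which is where the space saving mentioned above comes from.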
The paper is organized as follows. Section~\ref{sec:related} provides details of prior energy-efficient DRAM-related works. Section~\ref{sec:background} then gives an introduction to DRAM devices, variations, and how DRAM cells are affected by them. Section~\ref{sec:method} details the implementation technique. Section~\ref{sec:eval} describes the simulation platform used for evaluation and subsequently discusses the experiments conducted. Section~\ref{sec:result} sheds light on the results obtained and provides an analysis of them. Finally, Section~\ref{sec:conclusion} concludes this work with the scope for future improvements and extensions.
Related Work
\label{sec:related}
Exploring power savings in DRAM devices is one of the key techniques employed in power-efficient systems. Studies such as report that DRAM devices consume a significant portion of the system's power. A significant number of prior works have proposed towards saving power for DRAM devices. Delaluz et al. <|cite_start|> (Reference: {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight. While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations. Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve.) <|cite_end|>exploited the prolonged behavior of idle ranks for saving power. Lebeck et al. <|cite_start|> (Reference: Power aware page allocation: One of the major challenges of post-PC computing is the need to reduce energy consumption, thereby extending the lifetime of the batteries that power these mobile devies. Memory is a particularly important target for efforts to improve energy efficiency. Memory technology is becoming available that offers power management features such as the ability to put individual chips in any one of several different power modes. In this paper we explore the interaction of page placement with static and dynamic hardware policies to exploit these emerging hardware features. In particular, we consider page allocation policies that can be employed by an informed operating system to complement the hardware power management strategies. We perform experiments using two complementary simulation environments: a trace-driven simulator with workload traces that are representative of mobile computing and an execution-driven simulator with a detailed processor/memory model and a more memory-intensive set of benchmarks (SPEC2000). 
Our results make a compelling case for a cooperative hardware/software approach for exploiting power-aware memory, with down to as little as 45% of the Energy Delay for the best static policy and 1% to 20% of the Energy Delay for a traditional full-power memory.) <|cite_end|>demonstrated how a DRAM rank can be transitioned into low power mode. They proposed a method to cluster DRAM accesses into some specific chips so that idle chips can be transitioned into low power mode. This concept of powering down ranks is then established as a standard technique to save power from DRAM devices. Other techniques based on schedulers were then studied in works including <|cite_start|> (Reference: Power aware page allocation: One of the major challenges of post-PC computing is the need to reduce energy consumption, thereby extending the lifetime of the batteries that power these mobile devies. Memory is a particularly important target for efforts to improve energy efficiency. Memory technology is becoming available that offers power management features such as the ability to put individual chips in any one of several different power modes. In this paper we explore the interaction of page placement with static and dynamic hardware policies to exploit these emerging hardware features. In particular, we consider page allocation policies that can be employed by an informed operating system to complement the hardware power management strategies. We perform experiments using two complementary simulation environments: a trace-driven simulator with workload traces that are representative of mobile computing and an execution-driven simulator with a detailed processor/memory model and a more memory-intensive set of benchmarks (SPEC2000). Our results make a compelling case for a cooperative hardware/software approach for exploiting power-aware memory, with down to as little as 45% of the Energy Delay for the best static policy and 1% to 20% of the Energy Delay for a traditional full-power memory.) <|cite_end|> <|cite_start|> (Reference: {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight. While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations. Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. 
These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve.) <|cite_end|>. The authors proposed a method to dynamically map addresses into a specific set of banks in order to exploit power savings.
Hassan et al. <|cite_start|> (Reference: CROW: A Low-Cost Substrate for Improving DRAM Performance, Energy Efficiency, and Reliability: DRAM has been the dominant technology for architecting main memory for decades. Recent trends in multi-core system design and large-dataset applications have amplified the role of DRAM as a critical system bottleneck. We propose Copy-Row DRAM (CROW), a flexible substrate that enables new mechanisms for improving DRAM performance, energy efficiency, and reliability. We use the CROW substrate to implement 1) a low-cost in-DRAM caching mechanism that lowers DRAM activation latency to frequently-accessed rows by 38% and 2) a mechanism that avoids the use of short-retention-time rows to mitigate the performance and energy overhead of DRAM refresh operations. CROW's flexibility allows the implementation of both mechanisms at the same time. Our evaluations show that the two mechanisms synergistically improve system performance by 20.0% and reduce DRAM energy by 22.3% for memory-intensive four-core workloads, while incurring 0.48% extra area overhead in the DRAM chip and 11.3KiB storage overhead in the memory controller, and consuming 1.6% of DRAM storage capacity, for one particular implementation.) <|cite_end|>proposed an In-DRAM cache for DRAM-based memory systems. The proposed model makes a copy of a DRAM row on the cache. Later, during the execution of the program, if the same row is referenced, it is activated simultaneously from both the primary array and, as well as from the In-DRAM cache (also known as CROW-cache). Another aspect of saving power in DRAM devices is centered around periodic DRAM refreshes. In the work <|cite_start|> (Reference: RAIDR: Retention-Aware Intelligent DRAM Refresh: Dynamic random-access memory (DRAM) is the building block of modern main memory systems. DRAM cells must be periodically refreshed to prevent loss of data. These refresh operations waste energy and degrade system performance by interfering with memory accesses. The negative effects of DRAM refresh increase as DRAM device capacity increases. Existing DRAM devices refresh all cells at a rate determined by the leakiest cell in the device. However, most DRAM cells can retain data for significantly longer. Therefore, many of these refreshes are unnecessary. In this paper, we propose RAIDR (Retention-Aware Intelligent DRAM Refresh), a low-cost mechanism that can identify and skip unnecessary refreshes using knowledge of cell retention times. Our key idea is to group DRAM rows into retention time bins and apply a different refresh rate to each bin. As a result, rows containing leaky cells are refreshed as frequently as normal, while most rows are refreshed less frequently. RAIDR uses Bloom filters to efficiently implement retention time bins. RAIDR requires no modification to DRAM and minimal modification to the memory controller. In an 8-core system with 32 GB DRAM, RAIDR achieves a 74.6% refresh reduction, an average DRAM power reduction of 16.1%, and an average system performance improvement of 8.6% over existing systems, at a modest storage overhead of 1.25 KB in the memory controller. RAIDR's benefits are robust to variation in DRAM system configuration, and increase as memory capacity increases.) <|cite_end|>, the authors propose a selective refresh on variation affected cells in order to maintain the data properly and eliminate unnecessary refreshes. 
Bandwidth-aware power management <|cite_start|> (Reference: 2018 IEEE International Conference on Consumer Electronics (ICCE): ) <|cite_end|> <|cite_start|> (Reference: {An Approach for Adaptive DRAM Temperature and Power Management: High-performance DRAMs are providing increasing memory access bandwidth to processors, which is leading to high power consumption and operating temperature in DRAM chips. In this paper, we propose a customized low-power technique for high-performance DRAM systems to improve DRAM page hit rate by buffering write operations that may incur page misses. This approach reduces DRAM system power consumption and temperature without any performance penalty. We combine the throughput-aware page-hit-aware write buffer (TAP) with low-power-state-based techniques for further power and temperature reduction, namely, TAP-low. Our experiments show that a system with TAP-low could reduce the total DRAM power consumption by up to 68.6% (19.9% on average). The steady-state temperature can be reduced by as much as 7.84°C and 2.55°C on average across eight representative workloads.) <|cite_end|>for higher DRAM page hit rate is another technique to reduce the power requirements of DRAM devices. Lee et al. <|cite_start|> (Reference: IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2023, Foz do Iguacu, Brazil, June 20-23, 2023: ) <|cite_end|>proposed a bandwidth-aware page migration technique for heterogeneous memories.
Shutting down DRAM banks or transitioning DRAM banks into low power mode is a topic with limited study. Prior works on this aspect include <|cite_start|> (Reference: Proceedings of the Seventh International Symposium on High-Performance Computer Architecture (HPCA'01), Nuevo Leone, Mexico, January 20-24, 2001: ) <|cite_end|> <|cite_start|> (Reference: 20th International Symposium on Quality Electronic Design, ISQED 2019, Santa Clara, CA, USA, March 6-7, 2019: ) <|cite_end|>, wherein the former, authors propose both software and hardware-based techniques to exploit power savings from DRAM banks. The latter attempts to close under-utilized DRAM banks in order to save power. Another work includes a bank-sensitive power model for a DRAM power simulator called DRAMPower. The existing implementation of a DRAM module is biased towards power management in a rank-wise manner <|cite_start|> (Reference: Error Rate Estimation of DDR4-SDRAM Buffers in Space Mass Memories: DDR4-SDRAMs are key components widely used in modern computing systems on ground and in future space applications, where the sensitivity to ionizing radiation effects and the corresponding data integrity performance is of special interest. In this paper, an example DDR4-SDRAM buffer memory partition inside a high-performance mass memory system is described. For this buffer, two different EDAC implementations, which are Reed-Solomon single-symbol-error-correction and Reed-Solomon double-symbol-error-correction are compared in terms of data integrity performance. This comparison is based on the word error probabilities taking into account DDR4-SDRAM component specific single event effects and possible mitigation such as scrubbing and power cycling. The approach described in this work quantifies the design decision for a certain EDAC architecture as well as highlighting the impact of design parameters such as scrubbing and power cycling periods.) <|cite_end|>. Low power memory technologies like LPDDR DRAM have a provision of a bank-wise refresh instead of an all-bank refresh. The DRAM simulator called DRAMSim2 <|cite_start|> (Reference: Dramsim2: A cycle accurate memory system simulator: In this paper we present DRAMSim2, a cycle accurate memory system simulator. The goal of DRAMSim2 is to be an accurate and publicly available DDR2/3 memory system model which can be used in both full system and trace-based simulations. We describe the process of validating DRAMSim2 timing against manufacturer Verilog models in an effort to prove the accuracy of simulation results. We outline the combination of DRAMSim2 with a cycle-accurate x86 simulator that can be used to perform full system simulations. Finally, we discuss DRAMVis, a visualization tool that can be used to graph and compare the results of DRAMSim2 simulations.) <|cite_end|>also implements a heuristics-based technique to demonstrate low power mode in DRAM systems during simulations. Micron Technologies implements a low power mode for idle ranks for their DDR3 DRAM system where no I/O accesses are allowed to the rank which results in lower consumption of power.
The fabrication process of manufacturing chips is susceptible to variations <|cite_start|> (Reference: {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D...) <|cite_end|> <|cite_start|> (Reference: VARIUS: A Model of Process Variation and Resulting Timing Errors for Microarchitects: Within-die parameter variation poses a major challenge to high-performance microprocessor design, negatively impacting a processor's frequency and leakage power. Addressing this problem, this paper proposes a microarchitecture-aware model for process variation-including both random and systematic effects. The model is specified using a small number of highly intuitive parameters. Using the variation model, this paper also proposes a framework to model timing errors caused by parameter variation. The model yields the failure rate of microarchitectural blocks as a function of clock frequency and the amount of variation. With the combination of the variation model and the error model, we have VARIUS, a comprehensive model that is capable of producing detailed statistics of timing errors as a function of different process parameters and operating conditions. We propose possible applications of VARIUS to microarchitectural research.) <|cite_end|>. These variations lead to the aberration of nominal parameters of Process (P), Voltage (V), and Temperature (T) of these chips. Variation is a well-studied topic for DRAM devices. Hamamoto et al. investigated DRAM data retention distribution in their work <|cite_start|> (Reference: {On the retention time distribution of dynamic random access memory (DRAM): The retention time distribution of high-density dynamic random access memory (DRAM) has been investigated. The key issue for controlling the retention time distribution has been clarified and its model has been proposed for the first time. Trench capacitor cell with 0.6-/spl mu/m ground rule was evaluated. It was found that the retention time distribution consists of "tail distribution" and "main distribution." "tail distribution," by which DRAM refresh characteristics are restricted, depends on the boron concentration of the memory cell region. As boron concentration of the memory cell region increases, "tail distribution" is enhanced. This enhancement is due to the increase of the junction leakage current from the storage node. For the purpose of accounting for the nature of "Tail Distribution," the concept of thermionic field emission (TFE) current has been introduced. The high electric field at pn junction of the storage node enhances thermionic field emission from a deep level. The activation energy of the deep level is normally distributed among the memory cells, which leads to the normal distribution of log(retention time). Two methods for reducing "tail distribution" are proposed. One is to reduce the electric field of the depletion layer of the storage node. The other is to reduce the concentration of the deep level for TFE current.) <|cite_end|>. They reported that boron concentration in the p-well of memory cells characterizes the data retention distribution. Further, variation affects the reliability of the data present on the device which plays an important role in real-time systems and secured systems where corrupt data can lead to catastrophic consequences. 
Emerging memories and non-volatile memory technologies are gaining momentum in development as these provide a feasible alternative to DRAM-based memories <|cite_start|> (Reference: {Emerging NVM: A survey on Architectural Integration and Research Challenges: There has been a surge of interest in Non-Volatile Memory (NVM) in recent years. With many advantages, such as density and power consumption, NVM is carving out a place in the memory hierarchy and may eventually change our view of computer architecture. Many NVMs have emerged, such as Magnetoresistive random access memory (MRAM), Phase Change random access memory (PCM), Resistive random access memory (ReRAM), and Ferroelectric random access memory (FeRAM), each with its own peculiar properties and specific challenges. The scientific community has carried out a substantial amount of work on integrating those technologies in the memory hierarchy. As many companies are announcing the imminent mass production of NVMs, we think that it is time to have a step back and discuss the body of literature related to NVM integration. This article surveys state-of-the-art work on integrating NVM into the memory hierarchy. Specially, we introduce the four types of NVM, namely, MRAM, PCM, ReRAM, and FeRAM, and investigate different ways of integrating them into the memory hierarchy from the horizontal or vertical perspectives. Here, horizontal integration means that the new memory is placed at the same level as an existing one, while vertical integration means that the new memory is interleaved between two existing levels. In addition, we describe challenges and opportunities with each NVM technique.) <|cite_end|>. However, these technologies also have variations <|cite_start|> (Reference: Recap of the 23rd Asia and South Pacific Design Automation Conference (ASP-DAC): The 23rd Asia and South Pacific Design Automation Conference (ASP-DAC) was held at the International Convention Center, Jeju Island, South Korea, on 22–25 January 2018. ASP-DAC was founded in 1995 and has continuously offered opportunity for the researchers around Asia and South Pacific regions to communicate with each other and to get exposed to the advanced and recent technologies. This year’s event was sponsored by ACM Special Interest Group on Design Automation (SIGDA), the IEEE Circuits and Systems Society, and the IEEE Council on Electronic Design Automation. We also had two Japanese organizations, Institute of Electronics, Information and Communication Engineers and Information Processing Society of Japan, which supported the conference. We were very fortunate to have a number of corporate sponsors including Cadence, SK Hynix, Mentor, Synopsys, Entasys, Baum, and Silvaco. The conference was attended by 373 people, which is similar to when the conference was held in Chiba, Japan, last year; the largest number of attendees was from South Korea, followed by the United States, Taiwan, China, Japan, Hong Kong, and Germany.) <|cite_end|>. Even flash storage devices like SSD are affected by PV <|cite_start|> (Reference: Improving 3D NAND Flash Memory Lifetime by Tolerating Early Retention Loss and Process Variation: Compared to planar (i.e., two-dimensional) NAND flash memory, 3D NAND flash memory uses a new flash cell design, and vertically stacks dozens of silicon layers in a single chip. This allows 3D NAND flash memory to increase storage density using a much less aggressive manufacturing process technology than planar NAND flash memory. 
The circuit-level and structural changes in 3D NAND flash memory significantly alter how different error sources affect the reliability of the memory. In this paper, through experimental characterization of real, state-of-the-art 3D NAND flash memory chips, we find that 3D NAND flash memory exhibits three new error sources that were not previously observed in planar NAND flash memory: (1) layer-to-layer process variation, where the average error rate of each 3D-stacked layer in a chip is significantly different; (2) early retention loss, a new phenomenon where the number of errors due to charge leakage increases quickly within several hours after programming; and (3) retention interference, a new phenomenon where the rate at which charge leaks from a flash cell is dependent on the data value stored in the neighboring cell. Based on our experimental results, we develop new analytical models of layer-to-layer process variation and retention loss in 3D NAND flash memory. Motivated by our new findings and models, we develop four new techniques to mitigate process variation and early retention loss in 3D NAND flash memory. These four techniques are complementary, and can be combined together to significantly improve flash memory reliability. Compared to a state-of-the-art baseline, our techniques, when combined, improve flash memory lifetime by 1.85x. Alternatively, if a NAND flash vendor wants to keep the lifetime of the 3D NAND flash memory device constant, our techniques reduce the storage overhead required to hold error correction information by 78.9%.) <|cite_end|>.
Access latency of a DRAM device is affected by PV <|cite_start|> (Reference: VARIUS: A Model of Process Variation and Resulting Timing Errors for Microarchitects: Within-die parameter variation poses a major challenge to high-performance microprocessor design, negatively impacting a processor's frequency and leakage power. Addressing this problem, this paper proposes a microarchitecture-aware model for process variation-including both random and systematic effects. The model is specified using a small number of highly intuitive parameters. Using the variation model, this paper also proposes a framework to model timing errors caused by parameter variation. The model yields the failure rate of microarchitectural blocks as a function of clock frequency and the amount of variation. With the combination of the variation model and the error model, we have VARIUS, a comprehensive model that is capable of producing detailed statistics of timing errors as a function of different process parameters and operating conditions. We propose possible applications of VARIUS to microarchitectural research.) <|cite_end|> <|cite_start|> (Reference: {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D...) <|cite_end|>. Zhao et al. <|cite_start|> (Reference: {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D...) <|cite_end|>studied the effects of PV on a DRAM-based last level cache (LLC) where they tried to mitigate slow memory accesses by migrating the data to some other locations of the LLC so that the access latency is maintained uniformly. Sarangi et al. <|cite_start|> (Reference: VARIUS: A Model of Process Variation and Resulting Timing Errors for Microarchitects: Within-die parameter variation poses a major challenge to high-performance microprocessor design, negatively impacting a processor's frequency and leakage power. Addressing this problem, this paper proposes a microarchitecture-aware model for process variation-including both random and systematic effects. The model is specified using a small number of highly intuitive parameters. Using the variation model, this paper also proposes a framework to model timing errors caused by parameter variation. The model yields the failure rate of microarchitectural blocks as a function of clock frequency and the amount of variation. With the combination of the variation model and the error model, we have VARIUS, a comprehensive model that is capable of producing detailed statistics of timing errors as a function of different process parameters and operating conditions. We propose possible applications of VARIUS to microarchitectural research.) <|cite_end|>proposed a statistical method called \textit{VARIUS}, where the decaying portions of a chip can be identified. A study by Ghose et al. <|cite_start|> (Reference: What Your DRAM Power Models Are Not Telling You: Lessons from a Detailed Experimental Study: Main memory (DRAM) consumes as much as half of the total system power in a computer today, resulting in a growing need to develop new DRAM architectures and systems that consume less power. 
Researchers have long relied on DRAM power models that are based off of standardized current measurements provided by vendors, called IDD values. Unfortunately, we find that these models are highly inaccurate, and do not reflect the actual power consumed by real DRAM devices. We perform the first comprehensive experimental characterization of the power consumed by modern real-world DRAM modules. Our extensive characterization of 50 DDR3L DRAM modules from three major vendors yields four key new observations about DRAM power consumption: (1) across all IDD values that we measure, the current consumed by real DRAM modules varies significantly from the current specified by the vendors; (2) DRAM power consumption strongly depends on the data value that is read or written; (3) there is significant structural variation, where the same banks and rows across multiple DRAM modules from the same model consume more power than other banks or rows; and (4) over successive process technology generations, DRAM power consumption has not decreased by as much as vendor specifications have indicated. Based on our detailed analysis and characterization data, we develop the Variation-Aware model of Memory Power Informed by Real Experiments (VAMPIRE). We show that VAMPIRE has a mean absolute percentage error of only 6.8% compared to actual measured DRAM power. VAMPIRE enables a wide range of studies that were not possible using prior DRAM power models. As an example, we use VAMPIRE to evaluate a new power-aware data encoding mechanism, which can reduce DRAM energy consumption by an average of 12.2%. We plan to open-source both VAMPIRE and our extensive raw data collected during our experimental characterization.) <|cite_end|>concludes that most off-the-shelf DRAM devices are prone to variation. The authors did a study on each granularity of the DRAM device and compared it with state-of-the-art simulator results and concluded that there exists a large difference between real and simulated hardware results. Current DDRx DRAM systems perform row migrations in order to mitigate the issue of PV-affected defective rows. Although this mechanism does mitigate the issue of reliability and access latency, power efficiency, however, is not explored.
Work has also been done on preserving the data present in the chips so as to avoid an additional re-computation cost. Prior works on saving the state of the data held in caches are presented in <|cite_start|> (Reference: IEEE Computer Society Annual Symposium on VLSI, ISVLSI 2023, Foz do Iguacu, Brazil, June 20-23, 2023: ) <|cite_end|> <|cite_start|> (Reference: ILP-Based Energy Minimization Techniques for Banked Memories: Main memories can consume a significant portion of overall energy in many data-intensive embedded applications. One way of reducing this energy consumption is banking, that is, dividing available memory space into multiple banks and placing unused (idle) memory banks into low-power operating modes. Prior work investigated code-restructuring- and data-layout-reorganization-based approaches for increasing the energy benefits that could be obtained from a banked memory architecture. This article explores different techniques that can potentially coexist within the same optimization framework for maximizing benefits of low-power operating modes. These techniques include employing nonuniform bank sizes, data migration, data compression, and data replication. By using these techniques, we try to increase the chances for utilizing low-power operating modes in a more effective manner, and achieve further energy savings over what could be achieved by exploiting low-power modes alone. Specifically, nonuniform banking tries to match bank sizes with application-data access patterns. The goal of data migration is to cluster data with similar access patterns in the same set of banks. Data compression reduces the size of the data used by an application, and thus helps reduce the number of memory banks occupied by data. Finally, data replication increases bank idleness by duplicating select read-only data blocks across banks. We formulate each of these techniques as an ILP (integer linear programming) problem, and solve them using a commercial solver. Our experimental analysis using several benchmarks indicates that all the techniques presented in this framework are successful in reducing memory energy consumption. Based on our experience with these techniques, we recommend to compiler writers for banked memories to consider data compression, replication, and migration.) <|cite_end|> <|cite_start|> (Reference: {Data Remapping for Static NUCA in Degradable Chip Multiprocessors: In chip multiprocessors (CMPs), nonuniform cache architecture (NUCA) is often employed to organize last-level cache (LLC) banks through network-on-chip (NoC). Because of the shrinking feature size and unstable operating environment, severe reliability problems unavoidably emerge and cause frequent on-chip component (e.g., cores, cache banks, routers) failures. Typical fault-tolerant CMPs should possess the feature of graceful degradation and function normally with deactivated tiles. However, for CMPs adopting static NUCA, certain physical address areas will become inaccessible when cache banks in a CMP node are isolated from the system. To protect cache from such threats induced by either online or offline faults, we survey several potential solutions and propose the utility-driven node remapping technique by reusing the resources in NoC. In our NoC-assisted remapping scheme, cache accesses to isolated banks are so redirected that cache space contention are successfully balanced and relieved in shared-LLC, thus ensuring the least performance penalty caused by fault isolation.
Our experimental results show significant performance improvement over conventional resizing approaches such as set reduction.) <|cite_end|>. In <|cite_start|> (Reference: {Data Remapping for Static NUCA in Degradable Chip Multiprocessors: In chip multiprocessors (CMPs), nonuniform cache architecture (NUCA) is often employed to organize last-level cache (LLC) banks through network-on-chip (NoC). Because of the shrinking feature size and unstable operating environment, severe reliability problems unavoidably emerge and cause frequent on-chip component (e.g., cores, cache banks, routers) failures. Typical fault-tolerant CMPs should possess the feature of graceful degradation and function normally with deactivated tiles. However, for CMPs adopting static NUCA, certain physical address areas will become inaccessible when cache banks in a CMP node are isolated from the system. To protect cache from such threats induced by either online or offline faults, we survey several potential solutions and propose the utility-driven node remapping technique by reusing the resources in NoC. In our NoC-assisted remapping scheme, cache accesses to isolated banks are so redirected that cache space contention are successfully balanced and relieved in shared-LLC, thus ensuring the least performance penalty caused by fault isolation. Our experimental results show significant performance improvement over conventional resizing approaches such as set reduction.) <|cite_end|>, the authors propose a technique to design a reliable SNUCA cache for an NoC-based CMP. They first demonstrate how the chip degrades over time and then remap the data without any loss.
\begin{table}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{|l|l|l|l|l|l|l|}
\hline
\multirow{2}{*}{\textbf{Sl. No.}} & \multirow{2}{*}{\textbf{Name}} & \multirow{2}{*}{\textbf{Savings}} & \multicolumn{3}{c|}{\textbf{Overhead}} & \multirow{2}{*}{\textbf{State}} \\ \cline{4-6}
& & & \textbf{Performance} & \textbf{Area} & \textbf{Power} & \\ \hline \hline
1. & Rank Aware <|cite_start|> (Reference: Rank-Aware Dynamic Migrations and Adaptive Demotions for DRAM Power Management: Modern DRAM architectures allow a number of low-power states on individual memory ranks for advanced power management. Many previous studies have taken advantage of demotions on low-power states for energy saving. However, most of the demotion schemes are statically performed on a limited number of pre-selected low-power states, and are suboptimal for different workloads and memory architectures. Even worse, the idle periods are often too short for effective power state transitions, especially for memory intensive applications. Wrong decisions on power state transition incur significant energy and delay penalties. In this paper, we propose a novel memory system design named RAMZzz with rank-aware energy saving optimizations including dynamic page migrations and adaptive demotions. Specifically, we group the pages with similar access locality into the same rank with dynamic page migrations. Ranks have their hotness: hot ranks are kept busy for high utilization and cold ranks can have more lengthy idle periods for power state transitions. We further develop adaptive state demotions by considering all low-power states for each rank and a prediction model to estimate the power-down timeout among states. We experimentally compare our algorithm with other energy saving policies with cycle-accurate simulation. Experiments with benchmark workloads show that RAMZzz achieves significant improvement on energy-delay2 and energy consumption over other energy saving techniques.) <|cite_end|>& 24\% & 4\% & 0.4\% & 1.40\% & Cached \\ \hline
2. & PRA <|cite_start|> (Reference: 2017 IEEE International Symposium on High Performance Computer Architecture, HPCA 2017, Austin, TX, USA, February 4-8, 2017: ) <|cite_end|>& 23\% & - & 3\% of a 2 Gb DRAM & 0.017\% & N/A \\ \hline
3. & RAIDR <|cite_start|> (Reference: RAIDR: Retention-Aware Intelligent DRAM Refresh: Dynamic random-access memory (DRAM) is the building block of modern main memory systems. DRAM cells must be periodically refreshed to prevent loss of data. These refresh operations waste energy and degrade system performance by interfering with memory accesses. The negative effects of DRAM refresh increase as DRAM device capacity increases. Existing DRAM devices refresh all cells at a rate determined by the leakiest cell in the device. However, most DRAM cells can retain data for significantly longer. Therefore, many of these refreshes are unnecessary. In this paper, we propose RAIDR (Retention-Aware Intelligent DRAM Refresh), a low-cost mechanism that can identify and skip unnecessary refreshes using knowledge of cell retention times. Our key idea is to group DRAM rows into retention time bins and apply a different refresh rate to each bin. As a result, rows containing leaky cells are refreshed as frequently as normal, while most rows are refreshed less frequently. RAIDR uses Bloom filters to efficiently implement retention time bins. RAIDR requires no modification to DRAM and minimal modification to the memory controller. In an 8-core system with 32 GB DRAM, RAIDR achieves a 74.6% refresh reduction, an average DRAM power reduction of 16.1%, and an average system performance improvement of 8.6% over existing systems, at a modest storage overhead of 1.25 KB in the memory controller. RAIDR's benefits are robust to variation in DRAM system configuration, and increase as memory capacity increases.) <|cite_end|>& 20\% & - & 0.013 mm\textsuperscript{2} & - & N/A \\ \hline
4. & SPBR <|cite_start|> (Reference: 20th International Symposium on Quality Electronic Design, ISQED 2019, Santa Clara, CA, USA, March 6-7, 2019: ) <|cite_end|>& 9.11\% & 0.82\% & \textless 1\% of 400mm\textsuperscript{2} die & 1.09\% & Preserved \\ \hline
5. & Compiler Directed <|cite_start|> (Reference: Proceedings of the Seventh International Symposium on High-Performance Computer Architecture (HPCA'01), Nuevo Leone, Mexico, January 20-24, 2001: ) <|cite_end|>& 23\% & - & - & - & N/A \\ \hline
6. & Voltron & 10.5\% & - & - & - & N/A \\ \hline
7. & BAMM <|cite_start|> (Reference: 2018 IEEE International Conference on Consumer Electronics (ICCE): ) <|cite_end|>& 6.30\% & 0.70\% & - & - & N/A \\ \hline
8. & TAP-low <|cite_start|> (Reference: {An Approach for Adaptive DRAM Temperature and Power Management: High-performance DRAMs are providing increasing memory access bandwidth to processors, which is leading to high power consumption and operating temperature in DRAM chips. In this paper, we propose a customized low-power technique for high-performance DRAM systems to improve DRAM page hit rate by buffering write operations that may incur page misses. This approach reduces DRAM system power consumption and temperature without any performance penalty. We combine the throughput-aware page-hit-aware write buffer (TAP) with low-power-state-based techniques for further power and temperature reduction, namely, TAP-low. Our experiments show that a system with TAP-low could reduce the total DRAM power consumption by up to 68.6% (19.9% on average). The steady-state temperature can be reduced by as much as 7.84°C and 2.55°C on average across eight representative workloads.) <|cite_end|>& 19.90\% & 7.70\% & - & - & N/A \\ \hline
9. & \textbf{VAR-DRAM} & 29.54\% & 0.8\% & 0.002\% & 1.09\% & Preserved \\ \hline \hline
\end{tabular}
}
\caption{Summary of Proposed and Existing Works}
\label{tab:summary}
\end{table}
To regulate the supply voltage on a device, researchers widely use the concept of power gating <|cite_start|> (Reference: Gated-Vdd: A Circuit Technique to Reduce Leakage in Deep-Submicron Cache Memories: Deep-submicron CMOS designs have resulted in large leakage energy dissipation in microprocessors. While SRAM cells in on-chip cache memories always contribute to this leakage, there is a large variability in active cell usage both <italic>within</italic> and <italic>across</italic> appliðcations. This paper explores an integrated architectural and circuit-level approach to reducing leakage energy dissipation in instrucðtion caches. We propose, <italic>gated-V<subscrpt>dd</subscrpt>,</italic> a circuit-level technique to gate the supply voltage and reduce leakage in unused SRAM cells. Our results indicate that gated-V<subscrpt>dd</subscrpt> together with a novel resizable cache architecture reduces energy-delay by 62% with minimal impact on performance.) <|cite_end|>. This reduces the leakage current of the chip
which makes power-gated \textit{static random access memory} (SRAM) circuits an efficient means of implementing sleep modes, and hence a widely used power management technique. In <|cite_start|> (Reference: 53RD IEEE INTERNATIONAL MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS: ) <|cite_end|>, the authors propose a \textit{quasi-power-gating} approach to reduce leakage power dissipation in SRAM banks. Other applications of power gating are found in \textit{network-on-chips} (NoCs) <|cite_start|> (Reference: 2015 ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC) PROCEEDINGS TABLE OF CONTENTS: ) <|cite_end|> and in non-volatile (NV) memory technologies, where the authors propose a power-gated 1 Mb NV embedded memory that optimizes the trade-off between macro size and operational power.
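To illustrate how such low-power modes are typically exploited, the sketch below is a hedged example of our own; the state names, relative power levels, and timeouts are hypothetical rather than taken from any specific cited work. It demotes an idle memory rank into progressively deeper low-power states after fixed idle timeouts and pays a resynchronization penalty on the next access.
\begin{verbatim}
# Hypothetical timeout-based power-mode controller for one rank/bank.
# (state name, relative power, wake-up penalty in cycles) -- illustrative only.
STATES = [("active",       1.00,   0),
          ("power_down",   0.30,   6),
          ("self_refresh", 0.10, 512)]

class PowerModeController:
    def __init__(self, timeouts=(100, 10000)):
        self.timeouts = timeouts  # idle cycles before demoting to the next state
        self.state_idx = 0
        self.idle_cycles = 0

    def tick(self, accessed):
        """Advance one cycle; returns the wake-up penalty paid this cycle."""
        if accessed:
            penalty = STATES[self.state_idx][2]
            self.state_idx = 0
            self.idle_cycles = 0
            return penalty
        self.idle_cycles += 1
        if (self.state_idx < len(STATES) - 1 and
                self.idle_cycles >= self.timeouts[self.state_idx]):
            self.state_idx += 1   # demote to a deeper low-power state
            self.idle_cycles = 0
        return 0
\end{verbatim}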
\subsection{Limitations of Existing Works}
We acknowledge that most of the aforementioned prior works promise significant savings in terms of DRAM energy. Table~\ref{tab:summary} summarizes the previously discussed works, contrasting their DRAM power savings with their respective additional hardware requirements. Due to the conventional structure of DRAM devices, most DRAM power or energy-saving works are directed towards a \textit{rank-aware} power management mechanism <|cite_start|> (Reference: Error Rate Estimation of DDR4-SDRAM Buffers in Space Mass Memories: DDR4-SDRAMs are key components widely used in modern computing systems on ground and in future space applications, where the sensitivity to ionizing radiation effects and the corresponding data integrity performance is of special interest. In this paper, an example DDR4-SDRAM buffer memory partition inside a high-performance mass memory system is described. For this buffer, two different EDAC implementations, which are Reed-Solomon single-symbol-error-correction and Reed-Solomon double-symbol-error-correction are compared in terms of data integrity performance. This comparison is based on the word error probabilities taking into account DDR4-SDRAM component specific single event effects and possible mitigation such as scrubbing and power cycling. The approach described in this work quantifies the design decision for a certain EDAC architecture as well as highlighting the impact of design parameters such as scrubbing and power cycling periods.) <|cite_end|> <|cite_start|> (Reference: Rank-Aware Dynamic Migrations and Adaptive Demotions for DRAM Power Management: Modern DRAM architectures allow a number of low-power states on individual memory ranks for advanced power management. Many previous studies have taken advantage of demotions on low-power states for energy saving. However, most of the demotion schemes are statically performed on a limited number of pre-selected low-power states, and are suboptimal for different workloads and memory architectures. Even worse, the idle periods are often too short for effective power state transitions, especially for memory intensive applications. Wrong decisions on power state transition incur significant energy and delay penalties. In this paper, we propose a novel memory system design named RAMZzz with rank-aware energy saving optimizations including dynamic page migrations and adaptive demotions. Specifically, we group the pages with similar access locality into the same rank with dynamic page migrations. Ranks have their hotness: hot ranks are kept busy for high utilization and cold ranks can have more lengthy idle periods for power state transitions. We further develop adaptive state demotions by considering all low-power states for each rank and a prediction model to estimate the power-down timeout among states. We experimentally compare our algorithm with other energy saving policies with cycle-accurate simulation. Experiments with benchmark workloads show that RAMZzz achieves significant improvement on energy-delay2 and energy consumption over other energy saving techniques.) <|cite_end|> <|cite_start|> (Reference: {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight.
While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations. Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve.) <|cite_end|> <|cite_start|> (Reference: Power aware page allocation: One of the major challenges of post-PC computing is the need to reduce energy consumption, thereby extending the lifetime of the batteries that power these mobile devies. Memory is a particularly important target for efforts to improve energy efficiency. Memory technology is becoming available that offers power management features such as the ability to put individual chips in any one of several different power modes. In this paper we explore the interaction of page placement with static and dynamic hardware policies to exploit these emerging hardware features. In particular, we consider page allocation policies that can be employed by an informed operating system to complement the hardware power management strategies. We perform experiments using two complementary simulation environments: a trace-driven simulator with workload traces that are representative of mobile computing and an execution-driven simulator with a detailed processor/memory model and a more memory-intensive set of benchmarks (SPEC2000). Our results make a compelling case for a cooperative hardware/software approach for exploiting power-aware memory, with down to as little as 45% of the Energy Delay for the best static policy and 1% to 20% of the Energy Delay for a traditional full-power memory.) <|cite_end|> <|cite_start|> (Reference: RAIDR: Retention-Aware Intelligent DRAM Refresh: Dynamic random-access memory (DRAM) is the building block of modern main memory systems. DRAM cells must be periodically refreshed to prevent loss of data. These refresh operations waste energy and degrade system performance by interfering with memory accesses. The negative effects of DRAM refresh increase as DRAM device capacity increases. 
Existing DRAM devices refresh all cells at a rate determined by the leakiest cell in the device. However, most DRAM cells can retain data for significantly longer. Therefore, many of these refreshes are unnecessary. In this paper, we propose RAIDR (Retention-Aware Intelligent DRAM Refresh), a low-cost mechanism that can identify and skip unnecessary refreshes using knowledge of cell retention times. Our key idea is to group DRAM rows into retention time bins and apply a different refresh rate to each bin. As a result, rows containing leaky cells are refreshed as frequently as normal, while most rows are refreshed less frequently. RAIDR uses Bloom filters to efficiently implement retention time bins. RAIDR requires no modification to DRAM and minimal modification to the memory controller. In an 8-core system with 32 GB DRAM, RAIDR achieves a 74.6% refresh reduction, an average DRAM power reduction of 16.1%, and an average system performance improvement of 8.6% over existing systems, at a modest storage overhead of 1.25 KB in the memory controller. RAIDR's benefits are robust to variation in DRAM system configuration, and increase as memory capacity increases.) <|cite_end|>. Only a handful of works are directed towards saving power from DRAM banks <|cite_start|> (Reference: Proceedings of the Seventh International Symposium on High-Performance Computer Architecture (HPCA'01), Nuevo Leone, Mexico, January 20-24, 2001: ) <|cite_end|> <|cite_start|> (Reference: 20th International Symposium on Quality Electronic Design, ISQED 2019, Santa Clara, CA, USA, March 6-7, 2019: ) <|cite_end|>. Compiler-directed power management <|cite_start|> (Reference: Proceedings of the Seventh International Symposium on High-Performance Computer Architecture (HPCA'01), Nuevo Leone, Mexico, January 20-24, 2001: ) <|cite_end|>may not always be feasible, as it limits the distribution of pre-compiled files. Power management mechanisms at a granularity finer than ranks are becoming a necessity for memory devices, as the demand for low-power electronics and mobile devices has escalated over the last decade. DRAM manufacturers have partially addressed this issue by introducing per-bank refresh in low-power DDR devices; however, there is still room for improvement in this respect. A DRAM device's capacity is rarely filled completely at all times, yet the device continues to consume both static and dynamic power in the form of background and refresh power. This constitutes our first observation from prior works.
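A simple back-of-envelope calculation illustrates this observation; the per-bank power numbers below are purely hypothetical and only serve to show how idle banks inflate the background and refresh power of a partially filled device.
\begin{verbatim}
# Hypothetical numbers: estimate how much background + refresh power
# is spent on banks that hold no live data.
BANKS          = 8       # banks per device
BACKGROUND_MW  = 15.0    # background power per bank (hypothetical)
REFRESH_MW     = 5.0     # average refresh power per bank (hypothetical)
OCCUPIED_BANKS = 3       # banks that actually hold live data

idle_banks = BANKS - OCCUPIED_BANKS
wasted_mw  = idle_banks * (BACKGROUND_MW + REFRESH_MW)
total_mw   = BANKS * (BACKGROUND_MW + REFRESH_MW)
print(f"{wasted_mw:.1f} mW ({100 * wasted_mw / total_mw:.0f}% of "
      f"background+refresh power) is spent on idle banks")
\end{verbatim}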
The concept of trading off memory capacity for lower latency is not new. Choi et al. <|cite_start|> (Reference: {Multiple Clone Row DRAM: A Low Latency and Area Optimized DRAM: Several previous works have changed DRAM bank structure to reduce memory access latency and have shown performance improvement. However, changes in the area-optimized DRAM bank can incur large area-overhead. To solve this problem, we propose Multiple Clone Row DRAM (MCR-DRAM), which uses existing DRAM bank structure without any modification.) <|cite_end|>propose a mechanism that clones a row into multiple rows, which in turn allows faster accesses. This is because the sensing process becomes faster as more cells are sensed simultaneously. Luo et al. propose CLR-DRAM
"<|reference_start|> {Process Variation-Aware Nonuniform Cache Management in a 3D Die-Stacked Multicore Processor: Process variations in integrated circuits have significant impact on their performance, leakage, and stability. This is particularly evident in large, regular, and dense structures such as DRAMs. D... <|reference_end|>",
"<|reference_start|> {Hardware and Software Techniques for Controlling DRAM Power Modes: The anticipated explosive growth of pervasive and mobile computing devices that are typically constrained by energy has brought hardware and software techniques for energy conservation into the spotlight. While there have been several studies and proposals for energy conservation for CPUs and peripherals, energy optimization techniques for selective operating mode control of DRAMs have not been fully explored. It has been shown that, for some systems, as much as 90 percent of overall system energy (excluding I/O) is consumed by the DRAM modules, thus, they serve as a good candidate for energy optimizations. Further, DRAM technology has also matured to provide several low energy operating modes (power modes), making it an opportunistic moment to conduct studies exploring the potential benefits of mode control techniques. This paper conducts an in-depth investigation of software and hardware techniques to take advantage of the DRAM mode control capabilities at a module granularity for energy savings. Using a memory system architecture capturing five different energy modes and corresponding resynchronization times, this paper presents several novel compilation techniques to both cluster the data across memory banks as well as to detect module idleness and perform energy mode transitions. In addition, hardware-assisted approaches (called self-monitoring) based on predictions of module interaccess times are proposed. These techniques are extensively evaluated using a set of a dozen benchmarks. It is shown that we get an average of 61 percent savings in DRAM energy using compiler-directed mode control. One of the self-monitored approaches gives as much as 89 percent savings (72 percent on the average), coming as close as 8.8 percent to the optimal energy savings that one can expect with DRAM module mode control. The optimization techniques are demonstrated to be invaluable for energy savings as memory technologies continue to evolve. <|reference_end|>",
"<|reference_start|> 20th International Symposium on Quality Electronic Design, ISQED 2019, Santa Clara, CA, USA, March 6-7, 2019: <|reference_end|>",
"<|reference_start|> 2017 IEEE International Symposium on High Performance Computer Architecture, HPCA 2017, Austin, TX, USA, February 4-8, 2017: <|reference_end|>"
] | [
3,
7,
25,
44
] | {"<|multi_cite_1_3|>": "ss-1276459", "<|multi_cite_2_2|>": "ss-1831330", "<|multi_cite_4_1|>": "ss-2418682", "<|multi_cite_4_2|>": "ss-2418683", "<|multi_cite_5_2|>": "ss-2418684", "<|multi_cite_6_1|>": "ss-2418684", "<|multi_cite_6_2|>": "ss-1464896", "<|multi_cite_7_1|>": "ss-2418684", "<|multi_cite_7_3|>": "ss-1831330", "<|multi_cite_8_1|>": "ss-2418682", "<|multi_cite_8_2|>": "ss-2418683", "<|cite_9|>": "ss-2418682", "<|multi_cite_10_1|>": "ss-2418683", "<|cite_11|>": "ss-1383011", "<|cite_13|>": "ss-1464896", "<|cite_15|>": "ss-2418684", "<|cite_16|>": "ss-1693224", "<|multi_cite_17_1|>": "ss-1693224", "<|multi_cite_17_2|>": "ss-2418684", "<|cite_19|>": "ss-2418685", "<|cite_20|>": "ss-1383011", "<|multi_cite_21_1|>": "ss-1950564", "<|multi_cite_21_2|>": "ss-2418686", "<|cite_22|>": "ss-1535409", "<|multi_cite_23_1|>": "ss-1464896", "<|multi_cite_23_2|>": "ss-784430", "<|multi_cite_26_2|>": "ss-1831330", "<|cite_28|>": "ss-799125", "<|multi_cite_30_1|>": "ss-2418683", "<|multi_cite_30_2|>": "ss-2418682", "<|cite_31|>": "ss-1095497", "<|cite_32|>": "ss-1722743", "<|cite_33|>": "ss-1455382", "<|cite_34|>": "arxiv-165833", "<|multi_cite_35_1|>": "ss-2418682", "<|multi_cite_35_2|>": "ss-2418683", "<|cite_36|>": "ss-2418683", "<|cite_37|>": "ss-2418682", "<|cite_38|>": "arxiv-165820", "<|multi_cite_40_1|>": "ss-1535409", "<|multi_cite_40_2|>": "ss-2418687", "<|multi_cite_40_3|>": "ss-2418688", "<|cite_41|>": "ss-2418688", "<|cite_42|>": "arxiv-66284", "<|cite_43|>": "ss-1087210", "<|cite_44|>": "ss-1383011", "<|cite_45|>": "ss-784430", "<|cite_46|>": "ss-1464896", "<|cite_48|>": "ss-1950564", "<|cite_49|>": "ss-2418686", "<|cite_50|>": "ss-2418689", "<|cite_51|>": "ss-2418690", "<|cite_52|>": "ss-769276", "<|multi_cite_54_1|>": "ss-1831330", "<|multi_cite_54_2|>": "arxiv-66284", "<|multi_cite_54_3|>": "ss-2418684", "<|multi_cite_54_4|>": "ss-1693224", "<|multi_cite_54_5|>": "ss-1383011", "<|multi_cite_55_1|>": "ss-1464896", "<|multi_cite_55_2|>": "ss-784430", "<|cite_56|>": "ss-1464896", "<|cite_57|>": "ss-690840", "<|cite_58|>": "arxiv-267708", "<|multi_cite_59_1|>": "ss-2418683", "<|multi_cite_59_2|>": "ss-1095497", "<|multi_cite_59_3|>": "arxiv-165820", "<|cite_60|>": "ss-2418682"} |
2212.02085 | <|paper_start|> Title: RGB-L: Enhancing Indirect Visual SLAM using LiDAR-based Dense Depth Maps
Abstract: RGB-L: Enhancing Indirect Visual SLAM using LiDAR-based Dense Depth Maps: In this paper, we present a novel method for integrating 3D LiDAR depth measurements into the existing ORB-SLAM3 by building upon the RGB-D mode. We propose and compare two methods of depth map generation: conventional computer vision methods, namely an inverse dilation operation, and a supervised deep learning-based approach. We integrate the former directly into the ORB-SLAM3 framework by adding a so-called RGB-L (LiDAR) mode that directly reads LiDAR point clouds. The proposed methods are evaluated on the KITTI Odometry dataset and compared to each other and the standard ORB-SLAM3 stereo method. We demonstrate that, depending on the environment, advantages in trajectory accuracy and robustness can be achieved. Furthermore, we demonstrate that the runtime of the ORB-SLAM3 algorithm can be reduced by more than 40% compared to the stereo mode. The related code for the ORB-SLAM3 RGB-L mode will be available as open-source software under https://github.com/TUMFTM/ORB_SLAM3_RGBL.
Introduction
\label{sec:introduction}
Robust and precise localization is needed for all modules of autonomous vehicle software, such as path planning, trajectory following, or object prediction. To enable full self-driving capabilities, a reliance on satellite-based global localization systems such as the Global Positioning System (GPS) is not feasible. First, such sensors are expensive; second, sensor dropouts can lead to subsequent system failures. Therefore, simultaneous localization and mapping (SLAM) algorithms are important in developing robots and autonomous vehicles <|cite_start|> (Reference: Simultaneous localization and mapping (SLAM): part II: This paper discusses the recursive Bayesian formulation of the simultaneous localization and mapping (SLAM) problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. The paper focuses on three key areas: computational complexity; data association; and environment representation) <|cite_end|> <|cite_start|> (Reference: Closed form solutions to the multiple-platform simultaneous localization and map building (SLAM) problem: This paper presents a closed form solution to the multiple platform simultaneous localization and map building (SLAM) problem. Closed form solutions are presented in both state space and information based forms. A key conclusion of this paper is that the information-state based form offers many advantages over the state space formulation in allowing the SLAM algorithm to be decentralized across multiple platforms.) <|cite_end|>. These algorithms can provide a precise robot pose by only using sensor measurements of the environment and therefore provide an important localization technique. To achieve robust SLAM algorithms, it is necessary to include the environmental information from various sensors such as LiDARs and cameras. Since all sensors have their own individual advantages and disadvantages, sensor fusion can be a huge benefit for the localization of autonomous vehicles <|cite_start|> (Reference: A Review of Sensor Technologies for Perception in Automated Driving: After more than 20 years of research, ADAS are common in modern vehicles available in the market. Automated Driving systems, still in research phase and limited in their capabilities, are starting early commercial tests in public roads. These systems rely on the information provided by on-board sensors, which allow to describe the state of the vehicle, its environment and other actors. Selection and arrangement of sensors represent a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies, applied to common perception tasks for ADAS and Automated Driving. They are put in context making a historical review of the most relevant demonstrations on Automated Driving, focused on their sensing setup. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturers alliances that will show the intention of the market in sensors technologies for Automated Vehicles.) <|cite_end|>.
This paper presents an enhancement to the well-known \textit{ORB-SLAM3} <|cite_start|> (Reference: ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual--Inertial, and Multimap SLAM: This article presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multimap SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a tightly integrated visual-inertial SLAM system that fully relies on maximum a posteriori (MAP) estimation, even during IMU initialization, resulting in real-time robust operation in small and large, indoor and outdoor environments, being two to ten times more accurate than previous approaches. The second main novelty is a multiple map system relying on a new place recognition method with improved recall that lets ORB-SLAM3 survive to long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting them. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse in all the algorithm stages all previous information from high parallax co-visible keyframes, even if they are widely separated in time or come from previous mapping sessions, boosting accuracy. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.5 cm in the EuRoC drone and 9 mm under quick hand-held motions in the room of TUM-VI dataset, representative of AR/VR scenarios. For the benefit of the community we make public the source code.) <|cite_end|> algorithm. Currently, this algorithm relies solely on camera data to calculate the robot's pose. The goal is to enhance this algorithm with LiDAR sensor depth measurements to achieve a higher localization accuracy and better robustness e.g. in urban environments. To summarize, this paper comprises four main contributions:
\begin{itemize}
\item We present a method to integrate LiDAR depth measurements into the existing \textit{ORB-SLAM3} algorithm.
\item We propose and compare two methods of dense depth map generation from LiDAR point clouds.
\item We present a variety of experiments for localization of an autonomous vehicle that demonstrates the improved accuracy and robustness of our method.
\item We compare the runtimes and show a decrease of more than \SI{40}{\percent} compared to stereo mode.
\end{itemize} <|paper_end|> | [
"<|reference_start|> Simultaneous localization and mapping (SLAM): part II: This paper discusses the recursive Bayesian formulation of the simultaneous localization and mapping (SLAM) problem in which probability distributions or estimates of absolute or relative locations of landmarks and vehicle pose are obtained. The paper focuses on three key areas: computational complexity; data association; and environment representation <|reference_end|>",
"<|reference_start|> Closed form solutions to the multiple-platform simultaneous localization and map building (SLAM) problem: This paper presents a closed form solution to the multiple platform simultaneous localization and map building (SLAM) problem. Closed form solutions are presented in both state space and information based forms. A key conclusion of this paper is that the information-state based form offers many advantages over the state space formulation in allowing the SLAM algorithm to be decentralized across multiple platforms. <|reference_end|>",
"<|reference_start|> A Review of Sensor Technologies for Perception in Automated Driving: After more than 20 years of research, ADAS are common in modern vehicles available in the market. Automated Driving systems, still in research phase and limited in their capabilities, are starting early commercial tests in public roads. These systems rely on the information provided by on-board sensors, which allow to describe the state of the vehicle, its environment and other actors. Selection and arrangement of sensors represent a key factor in the design of the system. This survey reviews existing, novel and upcoming sensor technologies, applied to common perception tasks for ADAS and Automated Driving. They are put in context making a historical review of the most relevant demonstrations on Automated Driving, focused on their sensing setup. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturers alliances that will show the intention of the market in sensors technologies for Automated Vehicles. <|reference_end|>",
"<|reference_start|> ORB-SLAM3: An Accurate Open-Source Library for Visual, Visual--Inertial, and Multimap SLAM: This article presents ORB-SLAM3, the first system able to perform visual, visual-inertial and multimap SLAM with monocular, stereo and RGB-D cameras, using pin-hole and fisheye lens models. The first main novelty is a tightly integrated visual-inertial SLAM system that fully relies on maximum a posteriori (MAP) estimation, even during IMU initialization, resulting in real-time robust operation in small and large, indoor and outdoor environments, being two to ten times more accurate than previous approaches. The second main novelty is a multiple map system relying on a new place recognition method with improved recall that lets ORB-SLAM3 survive to long periods of poor visual information: when it gets lost, it starts a new map that will be seamlessly merged with previous maps when revisiting them. Compared with visual odometry systems that only use information from the last few seconds, ORB-SLAM3 is the first system able to reuse in all the algorithm stages all previous information from high parallax co-visible keyframes, even if they are widely separated in time or come from previous mapping sessions, boosting accuracy. Our experiments show that, in all sensor configurations, ORB-SLAM3 is as robust as the best systems available in the literature and significantly more accurate. Notably, our stereo-inertial SLAM achieves an average accuracy of 3.5 cm in the EuRoC drone and 9 mm under quick hand-held motions in the room of TUM-VI dataset, representative of AR/VR scenarios. For the benefit of the community we make public the source code. <|reference_end|>"
] | [
0,
1,
2,
3
] | {"<|multi_cite_1_1|>": "ss-746084", "<|multi_cite_1_2|>": "ss-1090379", "<|cite_2|>": "ss-773112", "<|cite_3|>": "ss-735936"} |
2109.01494-0 | <|paper_start|> Title: Computing Graph Descriptors on Edge Streams
Abstract: Computing Graph Descriptors on Edge Streams: Feature extraction is an essential task in graph analytics. These feature vectors, called graph descriptors, are used in downstream vector-space-based graph analysis models. This idea has proved fruitful in the past, with spectral-based graph descriptors providing state-of-the-art classification accuracy. However, known algorithms to compute meaningful descriptors do not scale to large graphs since: (1) they require storing the entire graph in memory, and (2) the end-user has no control over the algorithm's runtime. In this paper, we present streaming algorithms to approximately compute three different graph descriptors capturing the essential structure of graphs. Operating on edge streams allows us to avoid storing the entire graph in memory, and controlling the sample size enables us to keep the runtime of our algorithms within desired bounds. We demonstrate the efficacy of the proposed descriptors by analyzing the approximation error and classification accuracy. Our scalable algorithms compute descriptors of graphs with millions of edges within minutes. Moreover, these descriptors yield predictive accuracy comparable to the state-of-the-art methods but can be computed using only 25% as much memory.
Introduction
\label{sec:intro}
Graph analysis has a wide array of applications in various domains, from classifying chemicals based on their carcinogenicity <|cite_start|> (Reference: The Predictive Toxicology Challenge 2000-2001: We initiated the Predictive Toxicology Challenge (PTC) to stimulate the development of advanced SAR techniques for predictive toxicology models. The goal of this challenge is to predict the rodent carcinogenicity of new compounds based on the experimental results of the US National Toxicology Program (NTP). Submissions will be evaluated on quantitative and qualitative scales to select the most predictive models and those with the highest toxicological relevance. Availability: http://www.informatik.uni-freiburg.de/∼ml/ptc/ Contact: [email protected].) <|cite_end|>to determining the community structure in a friendship network <|cite_start|> (Reference: {Deep graph kernels: In this paper, we present Deep Graph Kernels, a unified framework to learn latent representations of sub-structures for graphs, inspired by latest advancements in language modeling and deep learning. Our framework leverages the dependency information between sub-structures by learning their latent representations. We demonstrate instances of our framework on three popular graph kernels, namely Graphlet kernels, Weisfeiler-Lehman subtree kernels, and Shortest-Path graph kernels. Our experiments on several benchmark datasets show that Deep Graph Kernels achieve significant improvements in classification accuracy over state-of-the-art graph kernels.) <|cite_end|>and even detecting discontinuities within instant messaging interactions <|cite_start|> (Reference: Network similarity via multiple social theories: Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a “signature” for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NetSimile, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NetSimile outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks.) <|cite_end|>. The fundamental building block for analysis is a pairwise similarity (or distance) measure between graphs. However, efficient computation of such a measure is challenging: even the best-known solution for determining whether a pair of graphs are isomorphic has a quasi-polynomial runtime. Similarly, computing Graph Edit Distance <|cite_start|> (Reference: A Distance Measure between Attributed Relational Graphs for Pattern Recognition: A method to determine a distance measure between two nonhierarchical attributed relational graphs is presented. In order to apply this distance measure, the graphs are characterised by descriptive graph grammars (DGG). 
The proposed distance measure is based on the computation of the minimum number of modifications required to transform an input graph into the reference one. Specifically, the distance measure is defined as the cost of recognition of nodes plus the number of transformations which include node insertion, node deletion, branch insertion, branch deletion, node label substitution and branch label substitution. The major difference between the proposed distance measure and the other ones is the consideration of the cost of recognition of nodes in the distance computation. In order to do this, the principal features of the nodes are described by one or several cost functions which are used to compute the similarity between the input nodes and the reference ones. Finally, an application of this distance measure to the recognition of lower case handwritten English characters is presented.) <|cite_end|>, the minimum number of node/edge additions and deletions required to transform one graph into the other, is \textsc{NP-Hard}.
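As a concrete, toy-sized illustration of this definition, the snippet below uses NetworkX's exact graph-edit-distance routine, which is practical only for very small graphs precisely because of this hardness.
\begin{verbatim}
# Toy illustration of graph edit distance; feasible only for tiny graphs.
import networkx as nx

g1 = nx.cycle_graph(3)  # a triangle
g2 = nx.path_graph(3)   # a path on three nodes

# Deleting a single edge turns the triangle into the path, so the distance is 1.
print(nx.graph_edit_distance(g1, g2))  # -> 1.0
\end{verbatim}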
A relatively pragmatic approach is constructing fixed dimensional descriptors (vector embeddings) for graphs, allowing classical data mining algorithms that operate on vector spaces. Existing models using this approach can be categorized into (1) supervised models, which use deep learning methods to construct vector embeddings based on optimizing a given objective function <|cite_start|> (Reference: Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks: In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically -- showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecule graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression.) <|cite_end|> <|cite_start|> (Reference: How Powerful are Graph Neural Networks?: Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.) <|cite_end|> <|cite_start|> (Reference: Toward understanding and evaluating structural node embeddings: While most network embedding techniques model the proximity between nodes in a network, recently there has been significant interest in structural embeddings that are based on node equivalences, a notion rooted in sociology: equivalences or positions are collections of nodes that have similar roles—i.e., similar functions, ties or interactions with nodes in other positions—irrespective of their distance or reachability in the network. Unlike the proximity-based methods that are rigorously evaluated in the literature, the evaluation of structural embeddings is less mature. 
It relies on small synthetic or real networks with labels that are not perfectly defined, and its connection to sociological equivalences has hitherto been vague and tenuous. With new node embedding methods being developed at a breakneck pace, proper evaluation, and systematic characterization of existing approaches will be essential to progress. To fill in this gap, we set out to understand what types of equivalences structural embeddings capture. We are the first to contribute rigorous intrinsic and extrinsic evaluation methodology for structural embeddings, along with carefully-designed, diverse datasets of varying sizes. We observe a number of different evaluation variables that can lead to different results (e.g., choice of similarity measure, classifier, and label definitions). We find that degree distributions within nodes’ local neighborhoods can lead to simple yet effective baselines in their own right and guide the future development of structural embedding. We hope that our findings can influence the design of further node embedding methods and also pave the way for more comprehensive and fair evaluation of structural embedding methods.) <|cite_end|>and (2) unsupervised models, which are based on graph-theoretic properties such as degree <|cite_start|> (Reference: Stochastic Graphlet Embedding: Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semi-structured data as graphs where nodes correspond to primitives (parts, interest points, segments, etc.) and edges characterize the relationships between these primitives. However, these non-vectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of -- explicit/implicit -- graph vectorization and embedding. This embedding process should be resilient to intra-class graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding (SGE) that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.) <|cite_end|> <|cite_start|> (Reference: Hunt for the unique, stable, sparse and fast feature learning on graphs: For the purpose of learning on graphs, we hunt for a graph feature representation that exhibit certain uniqueness, stability and sparsity properties while also being amenable to fast computation. This leads to the discovery of family of graph spectral distances (denoted as FGSD) and their based graph feature representations, which we prove to possess most of these desired properties. To both evaluate the quality of graph features produced by FGSD and demonstrate their utility, we apply them to the graph classification problem. 
Through extensive experiments, we show that a simple SVM based classification algorithm, driven with our powerful FGSD based graph features, significantly outperforms all the more sophisticated state-of-art algorithms on the unlabeled node datasets in terms of both accuracy and speed; it also yields very competitive results on the labeled datasets - despite the fact it does not utilize any node label information.) <|cite_end|>, the Laplacian eigenspectrum <|cite_start|> (Reference: The multiscale laplacian graph kernel: Many real world graphs, such as the graphs of molecules, exhibit structure at multiple different scales, but most existing kernels between graphs are either purely local or purely global in character. In contrast, by building a hierarchy of nested subgraphs, the Multiscale Laplacian Graph kernels (MLG kernels) that we define in this paper can account for structure at a range of different scales. At the heart of the MLG construction is another new graph kernel, called the Feature Space Laplacian Graph kernel (FLG kernel), which has the property that it can lift a base kernel defined on the vertices of two graphs to a kernel between the graphs. The MLG kernel applies such FLG kernels to subgraphs recursively. To make the MLG kernel computationally feasible, we also introduce a randomized projection procedure, similar to the Nystro m method, but for RKHS operators.) <|cite_end|>, or the distribution of a fixed number of subgraphs <|cite_start|> (Reference: Weisfeiler-Lehman graph kernels: In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.) <|cite_end|> <|cite_start|> (Reference: Efficient graphlet kernels for large graph comparison: State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.) <|cite_end|> <|cite_start|> (Reference: Interpretable multi-scale graph descriptors via structural compression: ) <|cite_end|> <|cite_start|> (Reference: Network Embedding via Motifs: Network embedding has emerged as an effective way to deal with downstream tasks, such as node classification [16, 31, 42]. 
Most existing methods leverage multi-similarities between nodes such as connectivity, which considers vertices that are closely connected to be similar and structural similarity, which is measured by assessing their relations to neighbors; while these methods only focus on static graphs. In this work, we bridge connectivity and structural similarity in a uniform representation via motifs, and consequently present an algorithm for Learning Embeddings by leveraging Motifs Of Networks (LEMON), which aims to learn embeddings for vertices and various motifs. Moreover, LEMON is inherently capable of dealing with inductive learning tasks for dynamic graphs. To validate the effectiveness and efficiency, we conduct various experiments on two real-world datasets and five public datasets from diverse domains. Through comparison with state-of-the-art baseline models, we find that LEMON achieves significant improvements in downstream tasks. We release our code on Github at https://github.com/larry2020626/LEMON.) <|cite_end|> <|cite_start|> (Reference: Mining Largest Maximal Quasi-Cliques: Quasi-cliques are dense incomplete subgraphs of a graph that generalize the notion of cliques. Enumerating quasi-cliques from a graph is a robust way to detect densely connected structures with applications in bioinformatics and social network analysis. However, enumerating quasi-cliques in a graph is a challenging problem, even harder than the problem of enumerating cliques. We consider the enumeration of top-k degree-based quasi-cliques and make the following contributions: (1) we show that even the problem of detecting whether a given quasi-clique is maximal (i.e., not contained within another quasi-clique) is NP-hard. (2) We present a novel heuristic algorithm KernelQC to enumerate the k largest quasi-cliques in a graph. Our method is based on identifying kernels of extremely dense subgraphs within a graph, followed by growing subgraphs around these kernels, to arrive at quasi-cliques with the required densities. (3) Experimental results show that our algorithm accurately enumerates quasi-cliques from a graph, is much faster than current state-of-the-art methods for quasi-clique enumeration (often more than three orders of magnitude faster), and can scale to larger graphs than current methods.) <|cite_end|> <|cite_start|> (Reference: Density Guarantee on Finding Multiple Subgraphs and Subtensors: Dense subregion (subgraph & subtensor) detection is a well-studied area, with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection, and can perform well in many applications. However, most of the existing works utilize the state-or-the-art greedy 2-approximation algorithm to capably provide solutions with a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can give a guarantee on the density with respect to the input tensor for the first estimated subsensor only. We address these drawbacks by providing both theoretical and practical solution for estimating multiple dense subtensors in tensor data and giving a higher lower bound of the density. 
In particular, we guarantee and prove a higher bound of the lower-bound density of the estimated subgraph and subtensors. We also propose a novel approach to show that there are multiple dense subtensors with a guarantee on its density that is greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrates its efficiency and feasibility.) <|cite_end|>.
Unsupervised models construct general-purpose descriptors and do not require prior training on datasets. This approach has yielded great success; for example, descriptors based on spectral features (i.e., the graph's Laplacian) provide excellent results on benchmark graph classification datasets <|cite_start|> (Reference: NetLSD: Hearing the Shape of a Graph: Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency.) <|cite_end|> <|cite_start|> (Reference: Hunt for the unique, stable, sparse and fast feature learning on graphs: For the purpose of learning on graphs, we hunt for a graph feature representation that exhibit certain uniqueness, stability and sparsity properties while also being amenable to fast computation. This leads to the discovery of family of graph spectral distances (denoted as FGSD) and their based graph feature representations, which we prove to possess most of these desired properties. To both evaluate the quality of graph features produced by FGSD and demonstrate their utility, we apply them to the graph classification problem. Through extensive experiments, we show that a simple SVM based classification algorithm, driven with our powerful FGSD based graph features, significantly outperforms all the more sophisticated state-of-art algorithms on the unlabeled node datasets in terms of both accuracy and speed; it also yields very competitive results on the labeled datasets - despite the fact it does not utilize any node label information.) <|cite_end|>. The order (number of vertices) and size (number of edges) of the graph and the number and nature of features computed directly determine the runtime and memory costs of the methods. By computing more statistics, one can construct more expressive descriptors. However, this approach does not scale well to real-world graphs due to their growing magnitudes.
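For intuition, a very simple spectral descriptor can be built by histogramming the eigenvalues of the normalized Laplacian, which always lie in [0, 2]; the sketch below is our own minimal illustration and not the exact construction of any of the cited methods. Note that even this toy descriptor requires the entire graph and a full eigendecomposition in memory, which is precisely the scalability bottleneck discussed next.
\begin{verbatim}
# Minimal illustrative spectral descriptor: a fixed-length histogram of the
# normalized-Laplacian eigenvalues (which lie in [0, 2]).
import numpy as np
import networkx as nx

def spectral_descriptor(G, bins=16):
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals = np.linalg.eigvalsh(L)           # real eigenvalues in [0, 2]
    hist, _ = np.histogram(eigvals, bins=bins, range=(0.0, 2.0), density=True)
    return hist                                # fixed-dimensional vector

d1 = spectral_descriptor(nx.erdos_renyi_graph(100, 0.05, seed=1))
d2 = spectral_descriptor(nx.barabasi_albert_graph(100, 3, seed=1))
print(np.linalg.norm(d1 - d2))  # graphs compared in vector space
\end{verbatim}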
Instead of storing and processing the entire graph, processing graphs as streams---one edge at a time---is a viable approach for limited memory settings <|cite_start|> (Reference: Network Sampling: From Static to Streaming Graphs: Network sampling is integral to the analysis of social, information, and biological networks. Since many real-world networks are massive in size, continuously evolving, and/or distributed in nature, the network structure is often sampled in order to facilitate study. For these reasons, a more thorough and complete understanding of network sampling is critical to support the field of network science. In this paper, we outline a framework for the general problem of network sampling, by highlighting the different objectives, population and units of interest, and classes of network sampling methods. In addition, we propose a spectrum of computational models for network sampling methods, ranging from the traditionally studied model based on the assumption of a static domain to a more challenging model that is appropriate for streaming domains. We design a family of sampling methods based on the concept of graph induction that generalize across the full spectrum of computational models (from static to streaming) while efficiently preserving many of the topological properties of the input graphs. Furthermore, we demonstrate how traditional static sampling algorithms can be modified for graph streams for each of the three main classes of sampling methods: node, edge, and topology-based sampling. Our experimental results indicate that our proposed family of sampling methods more accurately preserves the underlying properties of the graph for both static and streaming graphs. Finally, we study the impact of network sampling algorithms on the parameter estimation and performance evaluation of relational classification algorithms.) <|cite_end|>. The features are approximated from a representative sample of fixed size. This approach of trading-off accuracy for time and space complexity has yielded promising results on various graph analysis tasks such as graphlet counting <|cite_start|> (Reference: A Unified Framework to Estimate Global and Local Graphlet Counts for Streaming Graphs: Counting small connected subgraph patterns called graphlets is emerging as a powerful tool for exploring topological structure of networks and for analysis of roles of individual nodes. Graphlets have numerous applications ranging from biology to network science. Computing graphlet counts for "dynamic graphs" is highly challenging due to the streaming nature of the input, sheer size of the graphs, and superlinear time complexity of the problem. Few practical results are known under the massive streaming graphs setting. In this work, we propose a "unified framework" to estimate the graphlet counts of the whole graph as well as the graphlet counts of individual nodes under the streaming graph setting. Our framework subsumes previous methods and provides more flexible and accurate estimation of the graphlet counts. We propose a general unbiased estimator which can be applied to any k-node graphlets. Furthermore, efficient implementation is provided for the 3, 4-node graphlets. We perform detailed empirical study on real-world graphs, and show that our framework produces estimation of graphlet count for streaming graphs with 1.7 to 170.8 times smaller error compared with other state-of-the-art methods. 
Our framework also achieves high accuracy on the estimation of graphlets for each individual node which previous works could not achieve.) <|cite_end|>, butterfly counting <|cite_start|> (Reference: sGrapp: Butterfly Approximation in Streaming Graphs: We study the fundamental problem of butterfly (i.e. (2,2)-bicliques) counting in bipartite streaming graphs. Similar to triangles in unipartite graphs, enumerating butterflies is crucial in understanding the structure of bipartite graphs. This benefits many applications where studying the cohesion in a graph shaped data is of particular interest. Examples include investigating the structure of computational graphs or input graphs to the algorithms, as well as dynamic phenomena and analytic tasks over complex real graphs. Butterfly counting is computationally expensive, and known techniques do not scale to large graphs; the problem is even harder in streaming graphs. In this paper, following a data-driven methodology, we first conduct an empirical analysis to uncover temporal organizing principles of butterflies in real streaming graphs and then we introduce an approximate adaptive window-based algorithm, sGrapp, for counting butterflies as well as its optimized version sGrapp-x. sGrapp is designed to operate efficiently and effectively over any graph stream with any temporal behavior. Experimental studies of sGrapp and sGrapp-x show superior performance in terms of both accuracy and efficiency.) <|cite_end|> <|cite_start|> (Reference: {FLEET:: The shipping of goods around the world is continually increasing, especially since the onset of the coronavirus disease 2019 (COVID-19) pandemic. If you don’t live in a port city such as Seattle, it’s hard to imagine the enormity of commerce and its impacts. Mary Iverson’s artwork raises questions about the consequences of the growing consumerism, particularly how carbon footprints of the shipping industry contribute to climate change. Fleet illustrates a post-apocalyptic vision of what rising sea levels would look like in our cities. In the depicted great flood, a group of stranded container ships (the backbone of today’s global trade) are floating around, calling attention to consumerism and its huge impacts on climate change. Imagining the big flood in cities with floating shipping containers in a climate-changing world, Fleet, a post-apocalyptic vision, asks us to consider our growing demand, consumerism, and their environmental impacts.) <|cite_end|>, and triangle counting <|cite_start|> (Reference: Tri-Fly: Distributed Estimation of Global and Local Triangle Counts in Graph Streams: ) <|cite_end|> <|cite_start|> (Reference: E-TRI: E-Vehicle Testbed Routing Infrastructure: Routing long trips of electric vehicles (EVs) is in growing demand and a non-trivial task as charging stops have to be planned along the way. Developing and testing realistic EV routing algorithms is challenging as multiple factors have to be considered such as traffic conditions, charging station availability, and car properties relevant to energy-consumption modeling. Moreover, testing and evaluating such algorithms requires realistic data and tools to simulate energy consumption and charging. This paper demonstrates a web-based testbed system for EV routing algorithms. Users can input start and end points on the map, set the car properties that influence the energy consumption, and adjust the charging station availability to see how the results change. 
The system visualizes a set of proposed routes as well as a number of alternative routes considered (but discarded) by the routing algorithm. Details of each leg of each route can be interactively explored. The highly configurable system allows the algorithm developers to ask what-if and why-not questions.) <|cite_end|>; despite storing a fraction of edges, these models have produced unbiased estimates with reasonably low error rates. Based on the success of these methods, our descriptors are designed to compute graph representations from edge streams, allowing us to compute features without storing the entire graph. In contrast, all existing descriptors and representation paradigms require storing the entire graph in memory.
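To make the streaming setting concrete, the following sketch (in Python, with illustrative names that are not taken from our implementation) maintains a fixed-size, uniform sample of edges over a single pass of an edge stream using standard reservoir sampling. This is only an illustration of the bounded-memory principle; the sampling scheme actually used by our descriptors is described in Section~\ref{sec:sol}.
\begin{verbatim}
import random

def reservoir_sample_edges(edge_stream, budget):
    """Keep a uniform sample of at most `budget` edges from a
    single pass over an edge stream (standard reservoir sampling)."""
    reservoir = []
    for t, edge in enumerate(edge_stream):
        if t < budget:
            reservoir.append(edge)
        else:
            # Replace a stored edge with probability budget/(t+1), so every
            # edge seen so far remains equally likely to be in the sample.
            j = random.randint(0, t)
            if j < budget:
                reservoir[j] = edge
    return reservoir

# Example: sample 3 edges from a short stream.
stream = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
print(reservoir_sample_edges(stream, budget=3))
\end{verbatim}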
This work is an extension of <|cite_start|> (Reference: Estimating Descriptors for Large Graphs: Embedding networks into a fixed dimensional feature space, while preserving its essential structural properties is a fundamental task in graph analytics. These feature vectors (graph descriptors) are used to measure the pairwise similarity between graphs. This enables applying data mining algorithms (e.g classification, clustering, or anomaly detection) on graph-structured data which have numerous applications in multiple domains. State-of-the-art algorithms for computing descriptors require the entire graph to be in memory, entailing a huge memory footprint, and thus do not scale well to increasing sizes of real-world networks. In this work, we propose streaming algorithms to efficiently approximate descriptors by estimating counts of sub-graphs of order $k\leq 4$, and thereby devise extensions of two existing graph comparison paradigms: the Graphlet Kernel and NetSimile. Our algorithms require a single scan over the edge stream, have space complexity that is a fraction of the input size, and approximate embeddings via a simple sampling scheme. Our design exploits the trade-off between available memory and estimation accuracy to provide a method that works well for limited memory requirements. We perform extensive experiments on real-world networks and demonstrate that our algorithms scale well to massive graphs.) <|cite_end|>, wherein we proposed descriptors based on features obtained from graph streams. These descriptors are inspired by two existing works, the \textsc{Graphlet Kernel} <|cite_start|> (Reference: Efficient graphlet kernels for large graph comparison: State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.) <|cite_end|>and \textsc{NetSimile} <|cite_start|> (Reference: Network similarity via multiple social theories: Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a “signature” for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NetSimile, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NetSimile outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks.) 
<|cite_end|>, which compute local graph statistics as features.
In this paper, we propose a new descriptor based on \textsc{NetLSD} <|cite_start|> (Reference: NetLSD: Hearing the Shape of a Graph: Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency.) <|cite_end|>along with the proofs and experiments showcasing the said descriptor's correctness and efficacy. We perform experiments on new benchmark datasets and provide data visualization of our proposed and \textsc{NetLSD} based embeddings using $t$-SNE.
\begin{figure}[!h]
\includegraphics[width=.95\linewidth]{Figures/introfig.pdf}
\caption{Contrast between the typical approach for computing descriptors and our proposed streaming approach. The descriptor in this example represents a graph by the counts of selected subgraphs. Note how we trade off accuracy for memory consumption by keeping only a fraction of the graph in memory.}
\end{figure}
Our contributions are summarized as follows:
\begin{itemize}
\item We propose simple graph descriptors that run on edge streams.
\item We provide proofs to show how the features used in \textsc{NetSimile} <|cite_start|> (Reference: Network similarity via multiple social theories: Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a “signature” for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NetSimile, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NetSimile outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks.) <|cite_end|>and \textsc{NetLSD} <|cite_start|> (Reference: NetLSD: Hearing the Shape of a Graph: Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency.) <|cite_end|>can be computed using subgraph counts.
\item We restrict our algorithms' time and space complexity to scale linearly in the order and size of the graph (for a fixed memory budget), and we provide theoretical bounds on both.
\item Empirical evaluation on benchmark graph classification datasets demonstrates that our descriptors are comparable to other state-of-the-art descriptors in terms of classification accuracy. Moreover, our descriptors scale to graphs with millions of nodes and edges because they do not require storing the entire graph in memory.
\item We perform data visualization to show the (global) distribution of data points under the proposed and state-of-the-art (SOTA) descriptors (a minimal illustrative sketch of the projection step follows this list). The visualization results show that \textsc{santa} preserves the data distribution better than \textsc{gabe} and \textsc{maeve}, and is comparable to the SOTA descriptor, \textsc{NetLSD}.
\end{itemize}
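The following minimal sketch, referenced in the last contribution above, illustrates only the projection step: it embeds a matrix of graph descriptors into two dimensions with $t$-SNE and plots the points colored by class label. The descriptor matrix here is random placeholder data, not the output of our method.
\begin{verbatim}
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder data: 200 graphs, 32-dimensional descriptors, binary labels.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))
y = rng.integers(0, 2, size=200)

# Project the descriptors to 2-D to inspect how well the classes separate.
X_2d = TSNE(n_components=2, random_state=0).fit_transform(X)

plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, cmap="coolwarm", s=10)
plt.title("t-SNE projection of graph descriptors")
plt.show()
\end{verbatim}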
The remainder of the paper is organized as follows. We review related work in Section~\ref{sec:rw} and give a formal problem description in Section~\ref{sec:prelim}. We describe our descriptors in detail in Section~\ref{sec:sol}. Section~\ref{sec_experimental_evaluation} details the experimental setup, including dataset statistics, preprocessing, hyperparameter values, and data visualization. In Section~\ref{sec:experiments} we report the experimental results of our method. Finally, we conclude the paper in Section~\ref{sec:conclusion}.
\section{Related Work}
\label{sec:rw}
In this section, we review closely related work on graph analysis. We discuss distance/similarity measures between graphs that are used in downstream machine learning algorithms, and we provide an overview of the basic paradigms for graph representation learning.
\subsection{Pairwise Proximity Measures between Graphs}
A fundamental building block for analyzing large graphs is evaluating pairwise similarity/distance between graphs. The \textit{direct approach} to computing pairwise proximity considers the entire structure of both graphs. A simple and best-known distance measure between graphs is the {\em Graph Edit Distance} (\textsc{ged}) <|cite_start|> (Reference: A Distance Measure between Attributed Relational Graphs for Pattern Recognition: A method to determine a distance measure between two nonhierarchical attributed relational graphs is presented. In order to apply this distance measure, the graphs are characterised by descriptive graph grammars (DGG). The proposed distance measure is based on the computation of the minimum number of modifications required to transform an input graph into the reference one. Specifically, the distance measure is defined as the cost of recognition of nodes plus the number of transformations which include node insertion, node deletion, branch insertion, branch deletion, node label substitution and branch label substitution. The major difference between the proposed distance measure and the other ones is the consideration of the cost of recognition of nodes in the distance computation. In order to do this, the principal features of the nodes are described by one or several cost functions which are used to compute the similarity between the input nodes and the reference ones. Finally, an application of this distance measure to the recognition of lower case handwritten English characters is presented.) <|cite_end|>. \textsc{ged}, like edit distance between sequences, counts the number of insertions, deletions, and substitutions of vertices and/or edges that are needed to transform one graph to the other. Runtimes of computing \textsc{ged} between two graphs are computationally prohibitive, restricting its applicability to graphs of very small orders and sizes. Another distance measure is based on permutations of vertices of one graph such that an error norm between the adjacency matrices of two graphs is minimum. Computing this distance and even relaxation of this distance is computationally expensive <|cite_start|> (Reference: Graph Isomorphism in Quasipolynomial Time: We show that the Graph Isomorphism (GI) problem and the related problems of String Isomorphism (under group action) (SI) and Coset Intersection (CI) can be solved in quasipolynomial ($\exp((\log n)^{O(1)})$) time. The best previous bound for GI was $\exp(O(\sqrt{n\log n}))$, where $n$ is the number of vertices (Luks, 1983); for the other two problems, the bound was similar, $\exp(\tilde{O}(\sqrt{n}))$, where $n$ is the size of the permutation domain (Babai, 1983). The algorithm builds on Luks's SI framework and attacks the barrier configurations for Luks's algorithm by group theoretic "local certificates" and combinatorial canonical partitioning techniques. We show that in a well-defined sense, Johnson graphs are the only obstructions to effective canonical partitioning. Luks's barrier situation is characterized by a homomorphism {\phi} that maps a given permutation group $G$ onto $S_k$ or $A_k$, the symmetric or alternating group of degree $k$, where $k$ is not too small. We say that an element $x$ in the permutation domain on which $G$ acts is affected by {\phi} if the {\phi}-image of the stabilizer of $x$ does not contain $A_k$. The affected/unaffected dichotomy underlies the core "local certificates" routine and is the central divide-and-conquer tool of the algorithm.) 
<|cite_end|> <|cite_start|> (Reference: A Family of Tractable Graph Distances: Important data mining problems such as nearest-neighbor search and clustering admit theoretical guarantees when restricted to objects embedded in a metric space. Graphs are ubiquitous, and clustering and classification over graphs arise in diverse areas, including, e.g., image processing and social networks. Unfortunately, popular distance scores used in these applications, that scale over large graphs, are not metrics and thus come with no guarantees. Classic graph distances such as, e.g., the chemical and the CKS distance are arguably natural and intuitive, and are indeed also metrics, but they are intractable: as such, their computation does not scale to large graphs. We define a broad family of graph distances, that includes both the chemical and the CKS distance, and prove that these are all metrics. Crucially, we show that our family includes metrics that are tractable. Moreover, we extend these distances by incorporating auxiliary node attributes, which is important in practice, while maintaining both the metric property and tractability.) <|cite_end|>. When there is a valid bijection between vertices of the two graphs, then a similar measure, \textsc{DeltaCon}, yields excellent results. However, requiring a valid bijection limits the applicability of \textsc{DeltaCon} only to a collection of graphs on the same vertex set.
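As a small illustration of the direct approach, exact \textsc{ged} can be computed with off-the-shelf tooling for tiny graphs, as in the sketch below using NetworkX's exact solver; the exponential runtime is precisely what makes this approach impractical beyond toy instances.
\begin{verbatim}
import networkx as nx

# Two small graphs: a 4-cycle and a path on four vertices.
G1 = nx.cycle_graph(4)
G2 = nx.path_graph(4)

# Exact graph edit distance: minimum number of node/edge insertions,
# deletions, and substitutions turning G1 into G2 (unit costs).
print(nx.graph_edit_distance(G1, G2))  # 1.0 -- delete one cycle edge
\end{verbatim}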
\vskip.05in
The representation learning approach to graph analysis maps graphs into a vector space. Standard vector-space machine learning algorithms are then employed, using a pairwise distance measure between the vector representations of graphs. We discuss three broad approaches in this vein.
\subsection{Kernel-Based Machine Learning Methods}
The \textit{kernel-based} machine learning methods represent each non-vector data item to a high dimensional vector. The feature vectors are based on counts (spectra) of all possible sub-structures of some fixed magnitude in the data item. A kernel function is then defined, usually as the dot-product of the pair of feature vectors. The pairwise kernel values between objects constitute a positive semi-definite matrix and serve as a similarity measure in the machine learning algorithm (e.g., SVM and kernel PCA). Explicit construction of feature vectors is computationally costly due to their large dimensionality. Therefore, in the so-called {\em kernel trick}, kernel values are directly evaluated based on objects. Kernel methods have yielded great successes for a variety of data such as images and sequences <|cite_start|> (Reference: Kernel Descriptors for Visual Recognition: The design of low-level image features is critical for computer vision algorithms. Orientation histograms, such as those in SIFT [16] and HOG [3], are the most successful and popular features for visual object and scene recognition. We highlight the kernel view of orientation histograms, and show that they are equivalent to a certain type of match kernels over image patches. This novel view allows us to design a family of kernel descriptors which provide a unified and principled framework to turn pixel attributes (gradient, color, local binary pattern, etc.) into compact patch-level features. In particular, we introduce three types of match kernels to measure similarities between image patches, and construct compact low-dimensional kernel descriptors from these match kernels using kernel principal component analysis (KPCA) [23]. Kernel descriptors are easy to design and can turn any type of pixel attribute into patch-level features. They outperform carefully tuned and sophisticated features including SIFT and deep belief networks. We report superior performance on standard image classification benchmarks: Scene-15, Caltech-101, CIFAR10 and CIFAR10-ImageNet.) <|cite_end|> <|cite_start|> (Reference: Generalized similarity kernels for efficient sequence classification: String kernel-based machine learning methods have yielded great success in practical tasks of structured/sequential data analysis. In this paper we propose a novel computational framework that uses general similarity metrics and distance-preserving embeddings with string kernels to improve sequence classification. An embedding step, a distance-preserving bitstring mapping, is used to effectively capture similarity between otherwise symbolically different sequence elements. We show that it is possible to retain computational efficiency of string kernels while using this more “precise” measure of similarity. We then demonstrate that on a number of sequence classification tasks such as music, and biological sequence classification, the new method can substantially improve upon state-of-the-art string kernel baselines.) <|cite_end|> <|cite_start|> (Reference: Efficient approximation algorithms for strings kernel based sequence classification: Sequence classification algorithms, such as SVM, require a definition of distance (similarity) measure between two sequences. A commonly used notion of similarity is the number of matches between $k$-mers ($k$-length subsequences) in the two sequences. Extending this definition, by considering two $k$-mers to match if their distance is at most $m$, yields better classification performance. 
This, however, makes the problem computationally much more complex. Known algorithms to compute this similarity have computational complexity that render them applicable only for small values of $k$ and $m$. In this work, we develop novel techniques to efficiently and accurately estimate the pairwise similarity score, which enables us to use much larger values of $k$ and $m$, and get higher predictive accuracy. This opens up a broad avenue of applying this classification approach to audio, images, and text sequences. Our algorithm achieves excellent approximation performance with theoretical guarantees. In the process we solve an open combinatorial problem, which was posed as a major hindrance to the scalability of existing solutions. We give analytical bounds on quality and runtime of our algorithm and report its empirical performance on real world biological and music sequences datasets.) <|cite_end|>. The most prominent graph kernels are the shortest-Path <|cite_start|> (Reference: {Shortest-Path Kernels on Graphs: Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels.) <|cite_end|>, Graphlet <|cite_start|> (Reference: Efficient graphlet kernels for large graph comparison: State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels.) <|cite_end|>, the Weisfeller-Lehman <|cite_start|> (Reference: Weisfeiler-Lehman graph kernels: In this article, we propose a family of efficient kernels for large graphs with discrete node labels. Key to our method is a rapid feature extraction scheme based on the Weisfeiler-Lehman test of isomorphism on graphs. It maps the original graph to a sequence of graphs, whose node attributes capture topological and label information. A family of kernels can be defined based on this Weisfeiler-Lehman sequence of graphs, including a highly efficient kernel comparing subtree-like patterns. Its runtime scales only linearly in the number of edges of the graphs and the length of the Weisfeiler-Lehman graph sequence. In our experimental evaluation, our kernels outperform state-of-the-art graph kernels on several graph classification benchmark data sets in terms of accuracy and runtime. 
Our kernels open the door to large-scale applications of graph kernels in various disciplines such as computational biology and social network analysis.) <|cite_end|>, and the hierarchical <|cite_start|> (Reference: The multiscale laplacian graph kernel: Many real world graphs, such as the graphs of molecules, exhibit structure at multiple different scales, but most existing kernels between graphs are either purely local or purely global in character. In contrast, by building a hierarchy of nested subgraphs, the Multiscale Laplacian Graph kernels (MLG kernels) that we define in this paper can account for structure at a range of different scales. At the heart of the MLG construction is another new graph kernel, called the Feature Space Laplacian Graph kernel (FLG kernel), which has the property that it can lift a base kernel defined on the vertices of two graphs to a kernel between the graphs. The MLG kernel applies such FLG kernels to subgraphs recursively. To make the MLG kernel computationally feasible, we also introduce a randomized projection procedure, similar to the Nystro m method, but for RKHS operators.) <|cite_end|>kernels. The computational and space complexity of the kernel matrix make kernel-based methods infeasible for large datasets of massive graphs.
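To illustrate the graphlet-kernel idea at a toy scale, the sketch below represents each graph by counts of connected 3-vertex induced subgraphs (2-paths and triangles) and takes a normalized dot product of the count vectors; actual graphlet kernels use larger graphlets and far more efficient counting than this brute-force enumeration.
\begin{verbatim}
from itertools import combinations
import numpy as np
import networkx as nx

def triad_counts(G):
    """Count connected 3-vertex induced subgraphs: [2-paths, triangles]."""
    counts = np.zeros(2)
    for trio in combinations(G.nodes(), 3):
        H = G.subgraph(trio)
        if nx.is_connected(H):
            counts[0 if H.number_of_edges() == 2 else 1] += 1
    return counts

def toy_graphlet_kernel(G1, G2):
    """Cosine-normalized dot product of the graphlet count vectors."""
    v1, v2 = triad_counts(G1), triad_counts(G2)
    return float(v1 @ v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)

print(toy_graphlet_kernel(nx.cycle_graph(5), nx.path_graph(5)))      # ~1.0
print(toy_graphlet_kernel(nx.cycle_graph(5), nx.complete_graph(5)))  # 0.0
\end{verbatim}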
\subsection{Deep Learning-Based Methods}
The deep learning approach to representation learning is to train a \textit{neural network} for embedding objects into Euclidean space. The goal here is to map `similar' objects to `close-by' points in $\mathbb{R}^d$. Deep learning-based methods and domain-specific techniques have been successfully used for embedding nodes in networks <|cite_start|> (Reference: Learning Graph Representations with Embedding Propagation: We propose Embedding Propagation (EP), an unsupervised learning framework for graph-structured data. EP learns vector representations of graphs by passing two types of messages between neighboring nodes. Forward messages consist of label representations such as representations of words and other attributes associated with the nodes. Backward messages consist of gradients that result from aggregating the label representations and applying a reconstruction loss. Node representations are finally computed from the representation of their labels. With significantly fewer parameters and hyperparameters an instance of EP is competitive with and often outperforms state of the art unsupervised and semi-supervised learning methods on a range of benchmark data sets.) <|cite_end|> <|cite_start|> (Reference: node2vec: Scalable Feature Learning for Networks: Prediction tasks over nodes and edges in networks require careful effort in engineering features used by learning algorithms. Recent research in the broader field of representation learning has led to significant progress in automating prediction by learning the features themselves. However, present feature learning approaches are not expressive enough to capture the diversity of connectivity patterns observed in networks. Here we propose node2vec, an algorithmic framework for learning continuous feature representations for nodes in networks. In node2vec, we learn a mapping of nodes to a low-dimensional space of features that maximizes the likelihood of preserving network neighborhoods of nodes. We define a flexible notion of a node's network neighborhood and design a biased random walk procedure, which efficiently explores diverse neighborhoods. Our algorithm generalizes prior work which is based on rigid notions of network neighborhoods, and we argue that the added flexibility in exploring neighborhoods is the key to learning richer representations. We demonstrate the efficacy of node2vec over existing state-of-the-art techniques on multi-label classification and link prediction in several real-world networks from diverse domains. Taken together, our work represents a new way for efficiently learning state-of-the-art task-independent representations in complex networks.) <|cite_end|> <|cite_start|> (Reference: {GraRep: Learning Graph Representations with Global Structural Information: In this paper, we present {GraRep}, a novel model for learning vertex representations of weighted graphs. This model learns low dimensional vectors to represent vertices appearing in a graph and, unlike existing work, integrates global structural information of the graph into the learning process. We also formally analyze the connections between our work and several previous research efforts, including the DeepWalk model of Perozzi et al. as well as the skip-gram model with negative sampling of Mikolov et al. We conduct experiments on a language network, a social network as well as a citation network and show that our learned global representations can be effectively used as features in tasks such as clustering, classification and visualization. 
Empirical results demonstrate that our representation significantly outperforms other state-of-the-art methods in such tasks.) <|cite_end|>and graphs <|cite_start|> (Reference: How Powerful are Graph Neural Networks?: Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.) <|cite_end|> <|cite_start|> (Reference: Weisfeiler and Leman Go Neural: Higher-order Graph Neural Networks: In recent years, graph neural networks (GNNs) have emerged as a powerful neural architecture to learn vector representations of nodes and graphs in a supervised, end-to-end fashion. Up to now, GNNs have only been evaluated empirically -- showing promising results. The following work investigates GNNs from a theoretical point of view and relates them to the $1$-dimensional Weisfeiler-Leman graph isomorphism heuristic ($1$-WL). We show that GNNs have the same expressiveness as the $1$-WL in terms of distinguishing non-isomorphic (sub-)graphs. Hence, both algorithms also have the same shortcomings. Based on this, we propose a generalization of GNNs, so-called $k$-dimensional GNNs ($k$-GNNs), which can take higher-order graph structures at multiple scales into account. These higher-order structures play an essential role in the characterization of social networks and molecule graphs. Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression.) <|cite_end|> <|cite_start|> (Reference: JANE: Jointly Adversarial Network Embedding: Motivated by the capability of Generative Adversarial Network on exploring the latent semantic space and capturing semantic variations in the data distribution, adversarial learning has been adopted in network embedding to improve the robustness. However, this important ability is lost in existing adversarially regularized network embedding methods, because their embedding results are directly compared to the samples drawn from perturbation (Gaussian) distribution without any rectification from real data. To overcome this vital issue, a novel Joint Adversarial Network Embedding (JANE) framework is proposed to jointly distinguish the real and fake combinations of the embeddings, topology information and node features. JANE contains three pluggable components, Embedding module, Generator module and Discriminator module.
The overall objective function of JANE is defined in a min-max form, which can be optimized via alternating stochastic gradient. Extensive experiments demonstrate the remarkable superiority of the proposed JANE on link prediction (3% gains in both AUC and AP) and node clustering (5% gain in F1 score).) <|cite_end|> <|cite_start|> (Reference: 3-in-1 correlated embedding via adaptive exploration of the structure and semantic subspaces: Combinational network embedding, which learns the node representation by exploring both topological and non-topological information, becomes popular due to the fact that the two types of information are complementing each other. Most of the existing methods either consider the topological and non-topological information being aligned or possess predetermined preferences during the embedding process.Unfortunately, previous methods fail to either explicitly describe the correlations between topological and non-topological information or adaptively weight their impacts. To address the existing issues, three new assumptions are proposed to better describe the embedding space and its properties. With the proposed assumptions, nodes, communities and topics are mapped into one embedding space. A novel generative model is proposed to formulate the generation process of the network and content from the embeddings, with respect to the Bayesian framework. The proposed model automatically leans to the information which is more discriminative.The embedding result can be obtained by maximizing the posterior distribution by adopting the variational inference and reparameterization trick. Experimental results indicate that the proposed method gives superior performances compared to the state-of-the-art methods when a variety of real-world networks is analyzed.) <|cite_end|>. Vector-space-based machine learning methods are then employed on these embeddings for data analysis. However, these approaches are data-hungry and computationally prohibitive <|cite_start|> (Reference: A Multi-cascaded Model with Data Augmentation for Enhanced Paraphrase Detection in Short Texts: Paraphrase detection is an important task in text analytics with numerous applications such as plagiarism detection, duplicate question identification, and enhanced customer support helpdesks. Deep models have been proposed for representing and classifying paraphrases. These models, however, require large quantities of human-labeled data, which is expensive to obtain. In this work, we present a data augmentation strategy and a multi-cascaded model for improved paraphrase detection in short texts. Our data augmentation strategy considers the notions of paraphrases and non-paraphrases as binary relations over the set of texts. Subsequently, it uses graph theoretic concepts to efficiently generate additional paraphrase and non-paraphrase pairs in a sound manner. Our multi-cascaded model employs three supervised feature learners (cascades) based on CNN and LSTM networks with and without soft-attention. The learned features, together with hand-crafted linguistic features, are then forwarded to a discriminator network for final classification. Our model is both wide and deep and provides greater robustness across clean and noisy short texts. We evaluate our approach on three benchmark datasets and show that it produces a comparable or state-of-the-art performance on all three.) <|cite_end|>, hindering their scalability to graphs of large orders and sizes.
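For intuition only, the following toy sketch mimics the neural-embedding recipe in NumPy: one neighborhood-averaging (message-passing) step over node features followed by mean pooling into a single graph-level vector. Real graph neural networks stack several learned layers and are trained end-to-end; everything here (weights, features) is randomly initialized for illustration.
\begin{verbatim}
import numpy as np

def toy_graph_embedding(A, X, W):
    """One propagation step + mean pooling: average node features over
    neighborhoods (with self-loops), apply a linear map and nonlinearity,
    then pool the node vectors into one graph-level embedding."""
    A_hat = A + np.eye(A.shape[0])              # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))    # row-normalize
    H = np.tanh(D_inv @ A_hat @ X @ W)          # propagate + nonlinearity
    return H.mean(axis=0)                       # mean-pool to graph level

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)          # a 3-node star
X = np.eye(3)                                   # one-hot node features
W = np.random.default_rng(0).normal(size=(3, 4))
print(toy_graph_embedding(A, X, W))
\end{verbatim}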
\subsection{Descriptor Computation Methods}
The \textit{descriptor} learning paradigm differs from kernel methods in that the dimensionality of the feature vectors is much smaller than the kernel-based features. Unlike neural network-based models, the features are explainable and hand-picked using domain-specific knowledge <|cite_start|> (Reference: Network similarity via multiple social theories: Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a “signature” for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NetSimile, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NetSimile outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks.) <|cite_end|> <|cite_start|> (Reference: Predicting Attributes of Nodes Using Network Structure: In many graphs such as social networks, nodes have associated attributes representing their behavior. Predicting node attributes in such graphs is an important problem with applications in many domains like recommendation systems, privacy preservation, and targeted advertisement. Attributes values can be predicted by analyzing patterns and correlations among attributes and employing classification/regression algorithms. However, these approaches do not utilize readily available network topology information. In this regard, interconnections between different attributes of nodes can be exploited to improve the prediction accuracy. In this paper, we propose an approach to represent a node by a feature map with respect to an attribute $a_i$ (which is used as input for machine learning algorithms) using all attributes of neighbors to predict attributes values for $a_i$. We perform extensive experimentation on ten real-world datasets and show that the proposed feature map significantly improves the prediction accuracy as compared to baseline approaches on these datasets.) <|cite_end|>.
One such graph descriptor, \textsc{NetSimile} <|cite_start|> (Reference: Network similarity via multiple social theories: Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a “signature” for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NetSimile, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NetSimile outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks.) <|cite_end|>, represents a graph by a vector of aggregates of various vertex-level features. It considers seven features for each vertex, such as degree, clustering coefficient, and parameters of vertices' neighbors and their ``ego-networks,'' and applies the aggregator functions, such as median, mean, standard deviation, skewness, and kurtosis, across each feature. Stochastic Graphlet Embedding <|cite_start|> (Reference: Stochastic Graphlet Embedding: Graph-based methods are known to be successful in many machine learning and pattern classification tasks. These methods consider semi-structured data as graphs where nodes correspond to primitives (parts, interest points, segments, etc.) and edges characterize the relationships between these primitives. However, these non-vectorial graph data cannot be straightforwardly plugged into off-the-shelf machine learning algorithms without a preliminary step of -- explicit/implicit -- graph vectorization and embedding. This embedding process should be resilient to intra-class graph variations while being highly discriminant. In this paper, we propose a novel high-order stochastic graphlet embedding (SGE) that maps graphs into vector spaces. Our main contribution includes a new stochastic search procedure that efficiently parses a given graph and extracts/samples unlimitedly high-order graphlets. We consider these graphlets, with increasing orders, to model local primitives as well as their increasingly complex interactions. In order to build our graph representation, we measure the distribution of these graphlets into a given graph, using particular hash functions that efficiently assign sampled graphlets into isomorphic sets with a very low probability of collision. When combined with maximum margin classifiers, these graphlet-based representations have positive impact on the performance of pattern comparison and recognition as corroborated through extensive experiments using standard benchmark databases.) <|cite_end|>proposes a graph descriptor based on random walks over graphs to extract graphlets (sub-structures) of increasing order. 
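To make the aggregation recipe concrete, a stripped-down NetSimile-style descriptor could look like the sketch below, using only two of the seven per-vertex features (degree and clustering coefficient) and five aggregators; this is an illustrative simplification, not the original feature set.
\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.stats import kurtosis, skew

def netsimile_like_descriptor(G):
    """Aggregate per-vertex features (degree, clustering coefficient)
    into a fixed-length graph descriptor, NetSimile-style."""
    degrees = np.array([d for _, d in G.degree()], dtype=float)
    clustering = np.array(list(nx.clustering(G).values()))
    descriptor = []
    for feature in (degrees, clustering):
        descriptor += [np.mean(feature), np.median(feature),
                       np.std(feature), skew(feature), kurtosis(feature)]
    return np.array(descriptor)

print(netsimile_like_descriptor(nx.karate_club_graph()))
\end{verbatim}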
Similar to this sub-structural approach is the Higher Order Structure Descriptor <|cite_start|> (Reference: Interpretable multi-scale graph descriptors via structural compression: ) <|cite_end|>, which iteratively compresses graphlets within a graph to generate ``higher-order'' graphs and constructs histograms of the graphlet counts in each graph. More recently, \textsc{feather} was introduced as a descriptor that computes node-level feature vectors using a complex characteristic function and aggregates these to construct graph embeddings <|cite_start|> (Reference: Characteristic Functions on Graphs: Birds of a Feather, from Statistical Descriptors to Parametric Models: In this paper, we propose a flexible notion of characteristic functions defined on graph vertices to describe the distribution of vertex features at multiple scales. We introduce FEATHER, a computationally efficient algorithm to calculate a specific variant of these characteristic functions where the probability weights of the characteristic function are defined as the transition probabilities of random walks. We argue that features extracted by this procedure are useful for node level machine learning tasks. We discuss the pooling of these node representations, resulting in compact descriptors of graphs that can serve as features for graph classification algorithms. We analytically prove that FEATHER describes isomorphic graphs with the same representation and exhibits robustness to data corruption. Using the node feature characteristic functions we define parametric models where evaluation points of the functions are learned parameters of supervised classifiers. Experiments on real world large datasets show that our proposed algorithm creates high quality representations, performs transfer learning efficiently, exhibits robustness to hyperparameter changes, and scales linearly with the input size.) <|cite_end|>. There has been a trend towards using graph spectra <|cite_start|> (Reference: Combinatorial Trace Method for Network Immunization: Immunizing a subset of nodes in a network - enabling them to identify and withstand the spread of harmful content - is one of the most effective ways to counter the spread of malicious content. It has applications in network security, public health policy, and social media surveillance. Finding a subset of nodes whose immunization results in the least vulnerability of the network is a computationally challenging task. In this work, we establish a relationship between a widely used network vulnerability measure and the combinatorial properties of networks. Using this relationship and graph summarization techniques, we propose an efficient approximation algorithm to find a set of nodes to immunize. We provide theoretical justifications for the proposed solution and analytical bounds on the runtime of our algorithm. We empirically demonstrate on various real-world networks that the performance of our algorithm is an order of magnitude better than the state of the art solution. We also show that in practice the runtime of our algorithm is significantly lower than that of the best-known solution.) <|cite_end|> <|cite_start|> (Reference: Spectral Methods for Immunization of Large Networks: Given a network of nodes, minimizing the spread of a contagion using a limited budget is a well-studied problem with applications in network security, viral marketing, social networks, and public health. 
In real graphs, virus may infect a node which in turn infects its neighbor nodes and this may trigger an epidemic in the whole graph. The goal thus is to select the best k nodes (budget constraint) that are immunized (vaccinated, screened, filtered) so as the remaining graph is less prone to the epidemic. It is known that the problem is, in all practical models, computationally intractable even for moderate sized graphs. In this paper we employ ideas from spectral graph theory to define relevance and importance of nodes. Using novel graph theoretic techniques, we then design an efficient approximation algorithm to immunize the graph. Theoretical guarantees on the running time of our algorithm show that it is more efficient than any other known solution in the literature. We test the performance of our algorithm on several real world graphs. Experiments show that our algorithm scales well for large graphs and outperforms state of the art algorithms both in quality (containment of epidemic) and efficiency (runtime and space complexity).) <|cite_end|> <|cite_start|> (Reference: Scalable Approximation Algorithm for Network Immunization: The problem of identifying important players in a given network is of pivotal importance for viral marketing, public health management, network security and various other fields of social network analysis. In this work we find the most important vertices in a graph G = (V,E) to immunize so as the chances of an epidemic outbreak is minimized. This problem is directly relevant to minimizing the impact of a contagion spread (e.g. flu virus, computer virus and rumor) in a graph (e.g. social network, computer network) with a limited budget (e.g. the number of available vaccines, antivirus software, filters). It is well known that this problem is computationally intractable (it is NP-hard). In this work we reformulate the problem as a budgeted combinational optimization problem and use techniques from spectral graph theory to design an efficient greedy algorithm to find a subset of vertices to be immunized. We show that our algorithm takes less time compared to the state of the art algorithm. Thus our algorithm is scalable to networks of much larger sizes than best known solutions proposed earlier. We also give analytical bounds on the quality of our algorithm. Furthermore, we evaluate the efficacy of our algorithm on a number of real world networks and demonstrate that the empirical performance of algorithm supplements the theoretical bounds we present, both in terms of approximation guarantees and computational efficiency.) <|cite_end|>to learn descriptors <|cite_start|> (Reference: Hunt for the unique, stable, sparse and fast feature learning on graphs: For the purpose of learning on graphs, we hunt for a graph feature representation that exhibit certain uniqueness, stability and sparsity properties while also being amenable to fast computation. This leads to the discovery of family of graph spectral distances (denoted as FGSD) and their based graph feature representations, which we prove to possess most of these desired properties. To both evaluate the quality of graph features produced by FGSD and demonstrate their utility, we apply them to the graph classification problem. 
Through extensive experiments, we show that a simple SVM based classification algorithm, driven with our powerful FGSD based graph features, significantly outperforms all the more sophisticated state-of-art algorithms on the unlabeled node datasets in terms of both accuracy and speed; it also yields very competitive results on the labeled datasets - despite the fact it does not utilize any node label information.) <|cite_end|> <|cite_start|> (Reference: NetLSD: Hearing the Shape of a Graph: Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency.) <|cite_end|>.
These descriptors are relatively computationally expensive but have excellent classification performance.
An exact method, Von Neumann Graph Entropy (VNGE) is proposed in <|cite_start|> (Reference: The Laplacian of a Graph as a Density Matrix: A Basic Combinatorial Approach to Separability of Mixed States: ) <|cite_end|> <|cite_start|> (Reference: Fast Incremental von Neumann Graph Entropy Computation: Theory, Algorithm, and Applications: The von Neumann graph entropy (VNGE) facilitates measurement of information divergence and distance between graphs in a graph sequence. It has been successfully applied to various learning tasks driven by network-based data. While effective, VNGE is computationally demanding as it requires the full eigenspectrum of the graph Laplacian matrix. In this paper, we propose a new computational framework, Fast Incremental von Neumann Graph EntRopy (FINGER), which approaches VNGE with a performance guarantee. FINGER reduces the cubic complexity of VNGE to linear complexity in the number of nodes and edges, and thus enables online computation based on incremental graph changes. We also show asymptotic equivalence of FINGER to the exact VNGE, and derive its approximation error bounds. Based on FINGER, we propose efficient algorithms for computing Jensen-Shannon distance between graphs. Our experimental results on different random graph models demonstrate the computational efficiency and the asymptotic equivalence of FINGER. In addition, we apply FINGER to two real-world applications and one synthesized anomaly detection dataset, and corroborate its superior performance over seven baseline graph similarity methods.) <|cite_end|>for graph comparison. Being an exact method, VNGE does not scale to large graphs. An approximate solution of NetLSD and VNGE, called SLaQ <|cite_start|> (Reference: Just SLaQ When You Approximate: Accurate Spectral Distances for Web-Scale Graphs: Graph comparison is a fundamental operation in data mining and information retrieval. Due to the combinatorial nature of graphs, it is hard to balance the expressiveness of the similarity measure and its scalability. Spectral analysis provides quintessential tools for studying the multi-scale structure of graphs and is a well-suited foundation for reasoning about differences between graphs. However, computing full spectrum of large graphs is computationally prohibitive; thus, spectral graph comparison methods often rely on rough approximation techniques with weak error guarantees. In this work, we propose SLaQ, an efficient and effective approximation technique for computing spectral distances between graphs with billions of nodes and edges. We derive the corresponding error bounds and demonstrate that accurate computation is possible in time linear in the number of graph edges. In a thorough experimental evaluation, we show that SLaQ outperforms existing methods, oftentimes by several orders of magnitude in approximation accuracy, and maintains comparable performance, allowing to compare million-scale graphs in a matter of minutes on a single machine.) <|cite_end|>, computes spectral distances between graphs with multi-billion nodes and edges. Although computationally efficient, SLaQ keeps the entire graph in the memory during the processing, making it costly in terms of space efficiency.
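For intuition, spectral descriptors of this family reduce a graph to functions of its Laplacian eigenvalues. The sketch below computes a NetLSD-style heat trace $h(t)=\sum_j e^{-t\lambda_j}$ at a few scales $t$ via dense eigendecomposition, which is exact but practical only for small graphs; \textsc{NetLSD} and SLaQ rely on approximations to reach larger ones.
\begin{verbatim}
import numpy as np
import networkx as nx

def heat_trace_descriptor(G, times=(0.1, 1.0, 10.0)):
    """NetLSD-style heat trace h(t) = sum_j exp(-t * lambda_j), where
    lambda_j are the eigenvalues of the normalized graph Laplacian."""
    L = nx.normalized_laplacian_matrix(G).toarray()
    eigvals = np.linalg.eigvalsh(L)   # exact, O(n^3) -- small graphs only
    return np.array([np.exp(-t * eigvals).sum() for t in times])

print(heat_trace_descriptor(nx.karate_club_graph()))
\end{verbatim}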
Most of the above approaches require multiple passes over the entire input graph. The resulting space complexity renders them applicable only to graphs of small orders and sizes. Real-world graphs, on the other hand, are dynamic and enormous in scale. Algorithms that perform a single pass over the input stream and have low memory requirements <|cite_start|> (Reference: 2019 International Conference on Advances in the Emerging Computing Technologies (AECT): ) <|cite_end|> are best suited for modern-day graphs. Under such single-pass and sub-linear memory constraints, an algorithm that computes the output with provable approximation guarantees suffices. Owing to the inherent difficulty of the streaming model, only a few recent algorithms count specific substructures in a streamed graph. These include approximately computing the number of triangles <|cite_start|> (Reference: E-TRI: E-Vehicle Testbed Routing Infrastructure: Routing long trips of electric vehicles (EVs) is in growing demand and a non-trivial task as charging stops have to be planned along the way. Developing and testing realistic EV routing algorithms is challenging as multiple factors have to be considered such as traffic conditions, charging station availability, and car properties relevant to energy-consumption modeling. Moreover, testing and evaluating such algorithms requires realistic data and tools to simulate energy consumption and charging. This paper demonstrates a web-based testbed system for EV routing algorithms. Users can input start and end points on the map, set the car properties that influence the energy consumption, and adjust the charging station availability to see how the results change. The system visualizes a set of proposed routes as well as a number of alternative routes considered (but discarded) by the routing algorithm. Details of each leg of each route can be interactively explored. The highly configurable system allows the algorithm developers to ask what-if and why-not questions.) <|cite_end|> in graphs, induced subgraphs of order three and four | [
"<|reference_start|> Efficient graphlet kernels for large graph comparison: State-of-the-art graph kernels do not scale to large graphs with hundreds of nodes and thousands of edges. In this article we propose to compare graphs by counting graphlets, i.e., subgraphs with k nodes where k ∈ {3, 4, 5}. Exhaustive enumeration of all graphlets being prohibitively expensive, we introduce two theoretically grounded speedup schemes, one based on sampling and the second one specifically designed for bounded degree graphs. In our experimental evaluation, our novel kernels allow us to efficiently compare large graphs that cannot be tackled by existing graph kernels. <|reference_end|>",
"<|reference_start|> NetLSD: Hearing the Shape of a Graph: Comparison among graphs is ubiquitous in graph analytics. However, it is a hard task in terms of the expressiveness of the employed similarity measure and the efficiency of its computation. Ideally, graph comparison should be invariant to the order of nodes and the sizes of compared graphs, adaptive to the scale of graph patterns, and scalable. Unfortunately, these properties have not been addressed together. Graph comparisons still rely on direct approaches, graph kernels, or representation-based methods, which are all inefficient and impractical for large graph collections. In this paper, we propose the Network Laplacian Spectral Descriptor (NetLSD): the first, to our knowledge, permutation- and size-invariant, scale-adaptive, and efficiently computable graph representation method that allows for straightforward comparisons of large graphs. NetLSD extracts a compact signature that inherits the formal properties of the Laplacian spectrum, specifically its heat or wave kernel; thus, it hears the shape of a graph. Our evaluation on a variety of real-world graphs demonstrates that it outperforms previous works in both expressiveness and efficiency. <|reference_end|>",
"<|reference_start|> {Shortest-Path Kernels on Graphs: Data mining algorithms are facing the challenge to deal with an increasing number of complex objects. For graph data, a whole toolbox of data mining algorithms becomes available by defining a kernel function on instances of graphs. Graph kernels based on walks, subtrees and cycles in graphs have been proposed so far. As a general problem, these kernels are either computationally expensive or limited in their expressiveness. We try to overcome this problem by defining expressive graph kernels which are based on paths. As the computation of all paths and longest paths in a graph is NP-hard, we propose graph kernels based on shortest paths. These kernels are computable in polynomial time, retain expressivity and are still positive definite. In experiments on classification of graph models of proteins, our shortest-path kernels show significantly higher classification accuracy than walk-based kernels. <|reference_end|>",
"<|reference_start|> Network similarity via multiple social theories: Given a set of k networks, possibly with different sizes and no overlaps in nodes or links, how can we quickly assess similarity between them? Analogously, are there a set of social theories which, when represented by a small number of descriptive, numerical features, effectively serve as a “signature” for the network? Having such signatures will enable a wealth of graph mining and social network analysis tasks, including clustering, outlier detection, visualization, etc. We propose a novel, effective, and scalable method, called NetSimile, for solving the above problem. Our approach has the following desirable properties: (a) It is supported by a set of social theories. (b) It gives similarity scores that are size-invariant. (c) It is scalable, being linear on the number of links for graph signature extraction. In extensive experiments on numerous synthetic and real networks from disparate domains, NetSimile outperforms baseline competitors. We also demonstrate how our approach enables several mining tasks such as clustering, visualization, discontinuity detection, network transfer learning, and re-identification across networks. <|reference_end|>"
] | [
25,
29,
36,
50
] | {"<|cite_1|>": "ss-1597250", "<|cite_2|>": "ss-1338052", "<|cite_3|>": "ss-836674", "<|cite_4|>": "ss-1536657", "<|multi_cite_5_1|>": "arxiv-175090", "<|multi_cite_5_2|>": "arxiv-174692", "<|multi_cite_5_3|>": "ss-870990", "<|multi_cite_6_1|>": "arxiv-115618", "<|multi_cite_6_2|>": "ss-1528671", "<|cite_7|>": "ss-1102084", "<|multi_cite_8_1|>": "ss-683051", "<|multi_cite_8_2|>": "ss-1283923", "<|multi_cite_8_3|>": "ss-1531091", "<|multi_cite_8_4|>": "ss-1536658", "<|multi_cite_8_5|>": "ss-1536659", "<|multi_cite_8_6|>": "ss-1536660", "<|multi_cite_9_1|>": "arxiv-160215", "<|multi_cite_9_2|>": "ss-1528671", "<|cite_11|>": "arxiv-38192", "<|cite_12|>": "ss-1536661", "<|multi_cite_13_1|>": "arxiv-317823", "<|multi_cite_13_2|>": "ss-1536662", "<|multi_cite_14_1|>": "ss-1536663", "<|multi_cite_14_2|>": "ss-1094726", "<|cite_15|>": "arxiv-245251", "<|cite_16|>": "ss-1283923", "<|cite_17|>": "ss-836674", "<|cite_18|>": "arxiv-160215", "<|cite_19|>": "ss-836674", "<|cite_20|>": "arxiv-160215", "<|cite_21|>": "ss-1536657", "<|multi_cite_22_1|>": "arxiv-88905", "<|multi_cite_22_2|>": "arxiv-145338", "<|multi_cite_24_1|>": "ss-1536664", "<|multi_cite_24_2|>": "ss-975990", "<|multi_cite_24_3|>": "ss-858124", "<|cite_25|>": "ss-737310", "<|cite_26|>": "ss-1283923", "<|cite_27|>": "ss-683051", "<|cite_28|>": "ss-1102084", "<|multi_cite_29_1|>": "arxiv-136789", "<|multi_cite_29_2|>": "arxiv-101396", "<|multi_cite_29_3|>": "ss-1231112", "<|multi_cite_30_1|>": "arxiv-174692", "<|multi_cite_30_2|>": "arxiv-175090", "<|multi_cite_30_3|>": "ss-1536665", "<|multi_cite_30_4|>": "ss-1536666", "<|cite_31|>": "arxiv-241090", "<|multi_cite_32_1|>": "ss-836674", "<|multi_cite_32_2|>": "arxiv-241170", "<|cite_33|>": "ss-836674", "<|cite_34|>": "arxiv-115618", "<|cite_35|>": "ss-1531091", "<|cite_36|>": "arxiv-265861", "<|multi_cite_37_1|>": "arxiv-241091", "<|multi_cite_37_2|>": "arxiv-138981", "<|multi_cite_37_3|>": "arxiv-138979", "<|multi_cite_38_1|>": "ss-1528671", "<|multi_cite_38_2|>": "arxiv-160215", "<|multi_cite_39_1|>": "ss-1195242", "<|multi_cite_39_2|>": "arxiv-160544", "<|cite_40|>": "arxiv-251658", "<|cite_41|>": "ss-1536667", "<|cite_42|>": "ss-1094726", "<|cite_43|>": "ss-1536661", "<|cite_44|>": "ss-1536662"} |
2207.14663 | <|paper_start|> Title: Going Off-Grid: Continuous Implicit Neural Representations for 3D Vascular Modeling
Abstract: Going Off-Grid: Continuous Implicit Neural Representations for 3D Vascular Modeling: Personalised 3D vascular models are valuable for diagnosis, prognosis and treatment planning in patients with cardiovascular disease. Traditionally, such models have been constructed with explicit representations such as meshes and voxel masks, or implicit representations such as radial basis functions or atomic (tubular) shapes. Here, we propose to represent surfaces by the zero level set of their signed distance function (SDF) in a differentiable implicit neural representation (INR). This allows us to model complex vascular structures with a representation that is implicit, continuous, light-weight, and easy to integrate with deep learning algorithms. We here demonstrate the potential of this approach with three practical examples. First, we obtain an accurate and watertight surface for an abdominal aortic aneurysm (AAA) from CT images and show robust fitting from as little as 200 points on the surface. Second, we simultaneously fit nested vessel walls in a single INR without intersections. Third, we show how 3D models of individual arteries can be smoothly blended into a single watertight surface. Our results show that INRs are a flexible representation with potential for minimally interactive annotation and manipulation of complex vascular structures.
Introduction
Accurate and patient-specific models of vascular systems are valuable for diagnosis, prognosis and treatment planning in patients with cardiovascular disease. Personalised vascular models might be used for stent-graft sizing in patients with abdominal aortic aneurysms <|cite_start|> (Reference: The benefits of EVAR planning using a 3D workstation.: ) <|cite_end|> or for computational fluid dynamics (CFD) <|cite_start|> (Reference: Patient-specific computational flow modelling for assessing hemodynamic changes following fenestrated endovascular aneurysm repair: ) <|cite_end|>.
However, extracting these models from medical image data can be cumbersome. Commercial software and open-source software packages <|cite_start|> (Reference: An image-based modeling framework for patient-specific computational hemodynamics: ) <|cite_end|> <|cite_start|> (Reference: {A re-engineered software interface and workflow for the open-source SimVascular cardiovascular modeling package: Patient-specific simulation plays an important role in cardiovascular disease research, diagnosis, surgical planning and medical device design, as well as education in cardiovascular biomechanics. simvascular is an open-source software package encompassing an entire cardiovascular modeling and simulation pipeline from image segmentation, three-dimensional (3D) solid modeling, and mesh generation, to patient-specific simulation and analysis. SimVascular is widely used for cardiovascular basic science and clinical research as well as education, following increased adoption by users and development of a GATEWAY web portal to facilitate educational access. Initial efforts of the project focused on replacing commercial packages with open-source alternatives and adding increased functionality for multiscale modeling, fluid-structure interaction (FSI), and solid modeling operations. In this paper, we introduce a major SimVascular (SV) release that includes a new graphical user interface (GUI) designed to improve user experience. Additional improvements include enhanced data/project management, interactive tools to facilitate user interaction, new boundary condition (BC) functionality, plug-in mechanism to increase modularity, a new 3D segmentation tool, and new computer-aided design (CAD)-based solid modeling capabilities. Here, we focus on major changes to the software platform and outline features added in this new release. We also briefly describe our recent experiences using SimVascular in the classroom for bioengineering education.) <|cite_end|> <|cite_start|> (Reference: CRIMSON: An open-source software framework for cardiovascular integrated modelling and simulation: In this work, we describe the CRIMSON (CardiovasculaR Integrated Modelling and SimulatiON) software environment. CRIMSON provides a powerful, customizable and user-friendly system for performing three-dimensional and reduced-order computational haemodynamics studies via a pipeline which involves: 1) segmenting vascular structures from medical images; 2) constructing analytic arterial and venous geometric models; 3) performing finite element mesh generation; 4) designing, and 5) applying boundary conditions; 6) running incompressible Navier-Stokes simulations of blood flow with fluid-structure interaction capabilities; and 7) post-processing and visualizing the results, including velocity, pressure and wall shear stress fields. A key aim of CRIMSON is to create a software environment that makes powerful computational haemodynamics tools accessible to a wide audience, including clinicians and students, both within our research laboratories and throughout the community. The overall philosophy is to leverage best-in-class open source standards for medical image processing, parallel flow computation, geometric solid modelling, data assimilation, and mesh generation. It is actively used by researchers in Europe, North and South America, Asia, and Australia. 
It has been applied to numerous clinical problems; we illustrate applications of CRIMSON to real-world problems using examples ranging from pre-operative surgical planning to medical device design optimization. CRIMSON binaries for Microsoft Windows 10, documentation and example input files are freely available for download from www.crimson.software, and the source code with compilation instructions is available on GitHub https://github.com/carthurs/CRIMSONFlowsolver (CRIMSON Flowsolver) under the GPL v3.0 license, and https://github.com/carthurs/CRIMSONGUI (CRIMSON GUI), under the AGPL v3.0 license. Support is available on the CRIMSON Google Groups forum, located at https://groups.google.com/forum/#!forum/crimson-users.) <|cite_end|> traditionally rely on the construction of tubular models <|cite_start|> (Reference: Splines as embeddings for generalized cylinders: ) <|cite_end|> in three steps. First, the lumen centerline is identified for each vessel. Then, local cross-sectional contours are determined and used to construct a watertight mesh model using (spline) interpolation. Finally, polygon mesh models of multiple vessels are blended to obtain a connected vascular tree. In this approach, tortuosity of the centerline can cause self-intersections of the orthogonal contours, resulting in surface folding of the final mesh model <|cite_start|> (Reference: Self-intersection avoidance and integral properties of generalized cylinders: ) <|cite_end|>. Moreover, smoothly connecting triangular meshes around bifurcations is challenging.
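To illustrate the lofting step of this pipeline, the following NumPy sketch sweeps circular cross-sections along a sampled centerline using a simple transported frame; it is a toy construction of our own, not the algorithm used by any of the packages cited above, and all function and parameter names are illustrative. When the tube radius exceeds the local radius of curvature of the centerline, neighbouring rings fold over each other, which is precisely the self-intersection problem mentioned above.
\begin{verbatim}
import numpy as np

def tube_from_centerline(points, radius, n_theta=16):
    """Sweep circular cross-sections along a polyline centerline.

    points : (N, 3) array of centerline samples.
    radius : tube radius (scalar or per-point array).
    Returns an (N, n_theta, 3) array of ring vertices; consecutive rings can be
    stitched into quads or triangles to obtain a surface mesh.
    """
    points = np.asarray(points, dtype=float)
    radius = np.broadcast_to(np.asarray(radius, dtype=float), (len(points),))
    tangents = np.gradient(points, axis=0)                 # tangents by central differences
    tangents /= np.linalg.norm(tangents, axis=1, keepdims=True)

    # Initial normal: any direction not parallel to the first tangent.
    normal = np.cross(tangents[0], [0.0, 0.0, 1.0])
    if np.linalg.norm(normal) < 1e-8:
        normal = np.cross(tangents[0], [0.0, 1.0, 0.0])
    normal /= np.linalg.norm(normal)

    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rings = np.empty((len(points), n_theta, 3))
    for i, (p, t, r) in enumerate(zip(points, tangents, radius)):
        # Re-orthogonalise the previous normal against the current tangent
        # (a crude frame transport; rotation-minimising frames behave better).
        normal = normal - np.dot(normal, t) * t
        normal /= np.linalg.norm(normal)
        binormal = np.cross(t, normal)
        rings[i] = p + r * (np.outer(np.cos(theta), normal)
                            + np.outer(np.sin(theta), binormal))
    return rings

# Example: a half-circle centerline with radius of curvature 3; a tube radius
# larger than 3 makes the rings on the inner side of the bend intersect.
s = np.linspace(0.0, np.pi, 50)
centerline = np.stack([3 * np.cos(s), 3 * np.sin(s), np.zeros_like(s)], axis=1)
rings = tube_from_centerline(centerline, radius=4.0)
\end{verbatim}
In practice, rotation-minimising frames and adaptive contour spacing mitigate, but do not fully remove, such folding around tortuous segments and bifurcations.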
Deep learning has made great progress towards automatic model building from images <|cite_start|> (Reference: Deep learning for cardiac image segmentation: A review: Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories are included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations with current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.) <|cite_end|> <|cite_start|> (Reference: State-of-the-Art Deep Learning in Cardiovascular Image Analysis.: ) <|cite_end|>. However, popular convolutional neural network-based methods return 3D voxel masks. Because voxel masks merely discretize an underlying continuous shape, their quality heavily depends on the resolution of the image data, and they are not guaranteed to be contiguous. Hence, voxel masks typically require additional processing steps before use in, e.g., CFD.
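As an example of such post-processing, the sketch below converts a binary voxel mask into a triangle mesh with scikit-image's marching cubes; the helper function is our own and purely illustrative. The extracted surface inherits the voxel resolution, so staircase artifacts, small disconnected components, and non-watertight regions typically still need to be cleaned up before the mesh can be used for CFD.
\begin{verbatim}
import numpy as np
from skimage import measure

def mask_to_mesh(mask, voxel_spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle mesh from a binary voxel mask via marching cubes."""
    verts, faces, normals, _ = measure.marching_cubes(
        mask.astype(np.float32), level=0.5, spacing=voxel_spacing)
    return verts, faces, normals

# Example: a coarse 32^3 sphere mask; the resulting mesh shows clear staircasing.
z, y, x = np.mgrid[:32, :32, :32]
mask = ((x - 16) ** 2 + (y - 16) ** 2 + (z - 16) ** 2) < 10 ** 2
verts, faces, _ = mask_to_mesh(mask)
\end{verbatim}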
There is thus a need for a shape representation for vascular models that is continuous and modular, and that can be easily integrated with existing deep learning methods.
\begin{figure}[t!]
\centering
\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width = \textwidth]{Images/voxel_nolines.png}
\caption{Voxel mask}
\label{voxelmask}
\end{subfigure}\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width = \textwidth]{Images/mesh_nolines.png}
\caption{Mesh}
\label{mesh}
\end{subfigure}\begin{subfigure}[t]{0.33\textwidth}
\includegraphics[width=\textwidth]{Images/SDF_test.png}
\caption{Signed distance function}
\label{sdf}
\end{subfigure}
\caption{Different representations of a 3D aortofemoral tree <|cite_start|> (Reference: The vascular model repository: a public resource of medical imaging data and blood flow simulation results: Patient-specific blood flow simulations may provide insight into disease progression, treatment options, and medical device design that would be difficult or impossible to obtain experimentally. However, publicly available image data and computer models for researchers and device designers are extremely limited. The National Heart, Lung, and Blood Institute sponsored Open Source Medical Software Corporation (contract nos. HHSN268200800008C and HHSN268201100035C) and its university collaborators to build a repository (www.vascularmodel.org) including realistic, image-based anatomic models and related hemodynamic simulation results to address this unmet need.) <|cite_end|>. \subref{voxelmask} and \subref{mesh} are \textit{explicit} representations; \subref{voxelmask} has non-smooth boundaries, whereas boundaries of \subref{mesh} are locally smooth. Both \subref{voxelmask} and \subref{mesh} are restricted to this resolution. \subref{sdf} \textit{implicitly} represents the surface with smooth boundaries, at any resolution.}
\label{fig:representations}
\end{figure}
In this work, we adapt the work of Gropp et al. <|cite_start|> (Reference: Implicit Geometric Regularization for Learning Shapes: Representing shapes as level sets of neural networks has been recently proved to be useful for different shape analysis and reconstruction tasks. So far, such representations were computed using either: (i) pre-computed implicit shape representations; or (ii) loss functions explicitly defined over the neural level sets. In this paper we offer a new paradigm for computing high fidelity implicit neural representations directly from raw data (i.e., point clouds, with or without normal information). We observe that a rather simple loss function, encouraging the neural network to vanish on the input point cloud and to have a unit norm gradient, possesses an implicit geometric regularization property that favors smooth and natural zero level set surfaces, avoiding bad zero-loss solutions. We provide a theoretical analysis of this property for the linear case, and show that, in practice, our method leads to state of the art implicit neural representations with higher level-of-details and fidelity compared to previous methods.) <|cite_end|> to model vascular systems as combinations of level sets of signed distance functions represented in differentiable neural networks.
Implicit representations and level sets have a substantial history in both segmentation <|cite_start|> (Reference: Level-Set Based Carotid Artery Segmentation for Stenosis Grading: ) <|cite_end|> <|cite_start|> (Reference: CURVES: Curve evolution for vessel segmentation: ) <|cite_end|> and 3D modeling <|cite_start|> (Reference: High-quality vascular modeling and modification with implicit extrusion surfaces for blood flow computations: ) <|cite_end|> <|cite_start|> (Reference: Interactive patient-specific vascular modeling with sweep surfaces: The precise modeling of vascular structures plays a key role in medical imaging applications, such as diagnosis, therapy planning and blood flow simulations. For the simulation of blood flow in particular, high-precision models are required to produce accurate results. It is thus common practice to perform extensive manual data polishing on vascular segmentations prior to simulation. This usually involves a complex tool chain which is highly impractical for clinical on-site application. To close this gap in current blood flow simulation pipelines, we present a novel technique for interactive vascular modeling which is based on implicit sweep surfaces. Our method is able to generate and correct smooth high-quality models based on geometric centerline descriptions on the fly. It supports complex vascular free-form contours and consequently allows for an accurate and fast modeling of pathological structures such as aneurysms or stenoses. We extend the concept of implicit sweep surfaces to achieve increased robustness and applicability as required in the medical field. We finally compare our method to existing techniques and provide case studies that confirm its contribution to current simulation pipelines.) <|cite_end|> of vascular structures. Recently, there have been significant advances in signal representations using neural networks, i.e., implicit neural representations (INRs) <|cite_start|> (Reference: NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\theta, \phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons.) 
<|cite_end|> <|cite_start|> (Reference: Implicit Neural Representations with Periodic Activation Functions: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions.) <|cite_end|> <|cite_start|> (Reference: DeepSDF: Learning Continuous Signed Distance Functions for Shape Representation: Computer graphics, 3D computer vision and robotics communities have produced multiple approaches to representing 3D geometry for rendering and reconstruction. These provide trade-offs across fidelity, efficiency and compression capabilities. In this work, we introduce DeepSDF, a learned continuous Signed Distance Function (SDF) representation of a class of shapes that enables high quality shape representation, interpolation and completion from partial and noisy 3D input data. DeepSDF, like its classical counterpart, represents a shape's surface by a continuous volumetric field: the magnitude of a point in the field represents the distance to the surface boundary and the sign indicates whether the region is inside (-) or outside (+) of the shape, hence our representation implicitly encodes a shape's boundary as the zero-level-set of the learned function while explicitly representing the classification of space as being part of the shapes interior or not. While classical SDF's both in analytical or discretized voxel form typically represent the surface of a single shape, DeepSDF can represent an entire class of shapes. Furthermore, we show state-of-the-art performance for learned 3D shape representation and completion while reducing the model size by an order of magnitude compared with previous work.) <|cite_end|> <|cite_start|> (Reference: ACORN: Adaptive Coordinate Networks for Neural Scene Representation: Neural representations have emerged as a new paradigm for applications in rendering, imaging, geometric modeling, and simulation. Compared to traditional representations such as meshes, point clouds, or volumes they can be flexibly incorporated into differentiable learning-based pipelines. While recent improvements to neural representations now make it possible to represent signals with fine details at moderate resolutions (e.g., for images and 3D shapes), adequately representing large-scale or complex scenes has proven a challenge. 
Current neural representations fail to accurately represent images at resolutions greater than a megapixel or 3D scenes with more than a few hundred thousand polygons. Here, we introduce a new hybrid implicit-explicit network architecture and training strategy that adaptively allocates resources during training and inference based on the local complexity of a signal of interest. Our approach uses a multiscale block-coordinate decomposition, similar to a quadtree or octree, that is optimized during training. The network architecture operates in two stages: using the bulk of the network parameters, a coordinate encoder generates a feature grid in a single forward pass. Then, hundreds or thousands of samples within each block can be efficiently evaluated using a lightweight feature decoder. With this hybrid implicit-explicit network architecture, we demonstrate the first experiments that fit gigapixel images to nearly 40 dB peak signal-to-noise ratio. Notably this represents an increase in scale of over 1000x compared to the resolution of previously demonstrated image-fitting experiments. Moreover, our approach is able to represent 3D shapes significantly faster and better than previous techniques; it reduces training times from days to hours or minutes and memory requirements by over an order of magnitude.) <|cite_end|> <|cite_start|> (Reference: {Learning Shape Reconstruction from Sparse Measurements with Neural Implicit Functions: Reconstructing anatomical shapes from sparse or partial measurements relies on prior knowledge of shape variations that occur within a given population. Such shape priors are learned from example shapes, obtained by segmenting volumetric medical images. For existing models, the resolution of a learned shape prior is limited to the resolution of the training data. However, in clinical practice, volumetric images are often acquired with highly anisotropic voxel sizes, e.g. to reduce image acquisition time in MRI or radiation exposure in CT imaging. The missing shape information between the slices prohibits existing methods to learn a high-resolution shape prior. We introduce a method for high-resolution shape reconstruction from sparse measurements without relying on high-resolution ground truth for training. Our method is based on neural implicit shape representations and learns a continuous shape prior only from highly anisotropic segmentations. Furthermore, it is able to learn from shapes with a varying field of view and can reconstruct from various) <|cite_end|>.
Continuous implicit neural representations of the signed distance function can be easily transformed into an explicit representation, while the inverse is not true (Fig. \ref{fig:representations}).
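Concretely, the fitting procedure of Gropp et al. that we build on amounts to training a small coordinate MLP $f_\theta$ so that it vanishes on points sampled from the surface while its spatial gradient has unit norm (the eikonal property of a signed distance function). The JAX sketch below is a minimal, generic reimplementation of this idea; the network size, sampling strategy, loss weights, and optimiser are illustrative choices and do not correspond to the exact configuration used in our experiments.
\begin{verbatim}
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(3, 128, 128, 1)):
    """A small fully connected network mapping a 3D coordinate to a scalar value."""
    params = []
    for d_in, d_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        w = jax.random.normal(sub, (d_in, d_out)) * jnp.sqrt(2.0 / d_in)
        params.append((w, jnp.zeros(d_out)))
    return params

def sdf(params, x):
    """Evaluate the implicit function at a single 3D point x."""
    h = x
    for w, b in params[:-1]:
        h = jnp.tanh(h @ w + b)
    w, b = params[-1]
    return (h @ w + b)[0]

def loss(params, surface_pts, off_pts):
    """IGR-style loss: vanish on surface samples + unit-gradient (eikonal) term."""
    on_surface = jnp.mean(jnp.abs(jax.vmap(lambda p: sdf(params, p))(surface_pts)))
    grads = jax.vmap(jax.grad(lambda p: sdf(params, p)))(
        jnp.concatenate([surface_pts, off_pts], axis=0))
    eikonal = jnp.mean((jnp.linalg.norm(grads, axis=-1) - 1.0) ** 2)
    return on_surface + 0.1 * eikonal

@jax.jit
def step(params, surface_pts, off_pts):
    lr = 1e-4
    grads = jax.grad(loss)(params, surface_pts, off_pts)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

key = jax.random.PRNGKey(0)
key, k1, k2 = jax.random.split(key, 3)
params = init_mlp(key)
surface = jax.random.normal(k1, (200, 3))                          # stand-in surface points
off = jax.random.uniform(k2, (500, 3), minval=-2.0, maxval=2.0)    # off-surface samples
for _ in range(100):
    params = step(params, surface, off)
\end{verbatim}
Once trained, the network can be evaluated on a grid of any desired resolution, and an explicit mesh can be recovered from its zero level set with marching cubes.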
Here, we demonstrate that INRs are a potentially valuable tool to bridge the gap between vascular modeling and deep learning. First, we show how INRs can reconstruct an implicit surface from a sparse point cloud, offering an alternative to conventional annotation procedures, and we evaluate the efficiency and robustness of this approach.
Second, we show that a single INR can represent multiple surfaces, and we demonstrate its effectiveness on nested shapes in an AAA case study.
Finally, we demonstrate the added value of implicit shape representations in the smooth blending of separate structures in the reconstruction of an aortofemoral tree. <|paper_end|> | [
"<|reference_start|> Patient-specific computational flow modelling for assessing hemodynamic changes following fenestrated endovascular aneurysm repair: <|reference_end|>",
"<|reference_start|> CRIMSON: An open-source software framework for cardiovascular integrated modelling and simulation: In this work, we describe the CRIMSON (CardiovasculaR Integrated Modelling and SimulatiON) software environment. CRIMSON provides a powerful, customizable and user-friendly system for performing three-dimensional and reduced-order computational haemodynamics studies via a pipeline which involves: 1) segmenting vascular structures from medical images; 2) constructing analytic arterial and venous geometric models; 3) performing finite element mesh generation; 4) designing, and 5) applying boundary conditions; 6) running incompressible Navier-Stokes simulations of blood flow with fluid-structure interaction capabilities; and 7) post-processing and visualizing the results, including velocity, pressure and wall shear stress fields. A key aim of CRIMSON is to create a software environment that makes powerful computational haemodynamics tools accessible to a wide audience, including clinicians and students, both within our research laboratories and throughout the community. The overall philosophy is to leverage best-in-class open source standards for medical image processing, parallel flow computation, geometric solid modelling, data assimilation, and mesh generation. It is actively used by researchers in Europe, North and South America, Asia, and Australia. It has been applied to numerous clinical problems; we illustrate applications of CRIMSON to real-world problems using examples ranging from pre-operative surgical planning to medical device design optimization. CRIMSON binaries for Microsoft Windows 10, documentation and example input files are freely available for download from www.crimson.software, and the source code with compilation instructions is available on GitHub https://github.com/carthurs/CRIMSONFlowsolver (CRIMSON Flowsolver) under the GPL v3.0 license, and https://github.com/carthurs/CRIMSONGUI (CRIMSON GUI), under the AGPL v3.0 license. Support is available on the CRIMSON Google Groups forum, located at https://groups.google.com/forum/#!forum/crimson-users. <|reference_end|>",
"<|reference_start|> NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis: We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location $(x,y,z)$ and viewing direction $(\\theta, \\phi)$) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis. View synthesis results are best viewed as videos, so we urge readers to view our supplementary video for convincing comparisons. <|reference_end|>",
"<|reference_start|> Implicit Neural Representations with Periodic Activation Functions: Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail, and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or Sirens, are ideally suited for representing complex natural signals and their derivatives. We analyze Siren activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how Sirens can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine Sirens with hypernetworks to learn priors over the space of Siren functions. <|reference_end|>"
] | [
1,
4,
15,
16
] | {"<|cite_1|>": "ss-782543", "<|cite_2|>": "ss-782544", "<|multi_cite_3_1|>": "ss-2425215", "<|multi_cite_3_2|>": "ss-782545", "<|multi_cite_3_3|>": "ss-719744", "<|cite_4|>": "ss-782546", "<|cite_5|>": "ss-782547", "<|multi_cite_6_1|>": "arxiv-233262", "<|multi_cite_6_2|>": "ss-682086", "<|cite_7|>": "ss-782548", "<|cite_8|>": "arxiv-249981", "<|multi_cite_9_1|>": "ss-782549", "<|multi_cite_9_2|>": "ss-782550", "<|multi_cite_10_1|>": "ss-782551", "<|multi_cite_10_2|>": "ss-782552", "<|multi_cite_11_1|>": "arxiv-254624", "<|multi_cite_11_2|>": "arxiv-272504", "<|multi_cite_11_3|>": "arxiv-187680", "<|multi_cite_11_4|>": "arxiv-339278", "<|multi_cite_11_5|>": "ss-782553"} |
2311.05106 | <|paper_start|> Title: A differentiable brain simulator bridging brain simulation and brain-inspired computing
Abstract: A differentiable brain simulator bridging brain simulation and brain-inspired computing: Brain simulation builds dynamical models to mimic the structure and functions of the brain, while brain-inspired computing (BIC) develops intelligent systems by learning from the structure and functions of the brain. The two fields are intertwined and should share a common programming framework to facilitate each other's development. However, none of the existing software in the fields can achieve this goal, because traditional brain simulators lack differentiability for training, while existing deep learning (DL) frameworks fail to capture the biophysical realism and complexity of brain dynamics. In this paper, we introduce BrainPy, a differentiable brain simulator developed using JAX and XLA, with the aim of bridging the gap between brain simulation and BIC. BrainPy expands upon the functionalities of JAX, a powerful AI framework, by introducing complete capabilities for flexible, efficient, and scalable brain simulation. It offers a range of sparse and event-driven operators for efficient and scalable brain simulation, an abstraction for managing the intricacies of synaptic computations, a modular and flexible interface for constructing multi-scale brain models, and an object-oriented just-in-time compilation approach to handle the memory-intensive nature of brain dynamics. We showcase the efficiency and scalability of BrainPy on benchmark tasks, highlight its differentiable simulation for biologically plausible spiking models, and discuss its potential to support research at the intersection of brain simulation and BIC.
Introduction
\label{introduction}
\vspace{-0.1 em}
Brain simulation aims to elucidate brain functions by building dynamical models that mimic the structure and dynamics of the brain <|cite_start|> (Reference: Neuronal dynamics: From single neurons to networks and
models of cognition: What happens in our brain when we make a decision? What triggers a neuron to send out a signal? What is the neural code? This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience. It covers classical topics, including the Hodgkin-Huxley equations and Hopfield model, as well as modern developments in the field such as Generalized Linear Models and decision theory. Concepts are introduced using clear step-by-step explanations suitable for readers with only a basic knowledge of differential equations and probabilities, and are richly illustrated by figures and worked-out examples. End-of-chapter summaries and classroom-tested exercises make the book ideal for courses or for self-study. The authors also give pointers to the literature and an extensive bibliography, which will prove invaluable to readers interested in further study.) <|cite_end|>, while brain-inspired computing aims to develop intelligent systems by learning from the structure and computational principles of the brain <|cite_start|> (Reference: Brain-inspired computing needs a master plan: ) <|cite_end|>. The two fields are intertwined and their developments can facilitate each other. For example, brain simulation can provide BIC with models of neurons, synapses, networks, and inspirational information processing principles; while BIC can provide brain simulation with efficient algorithms for optimizing model parameters, simulation tools for running large-scale networks, and a testing bed for validating hypothesized neural mechanisms.
Ideally, the two fields should share a common programming framework, so that they can benefit from each other's development by sharing models, mathematical tools, and emerging findings.
However, up to now, none of the existing software in the two fields can fully achieve this goal. Traditional brain simulators, such as NEURON <|cite_start|> (Reference: {The NEURON simulation environment: The moment-to-moment processing of information by the nervous system involves the propagation and interaction of electrical and chemical signals that are distributed in space and time. Biologically realistic modeling is needed to test hypotheses about the mechanisms that govern these signals and how nervous system function emerges from the operation of these mechanisms. The NEURON simulation program provides a powerful and flexible environment for implementing such models of individual neurons and small networks of neurons. It is particularly useful when membrane potential is nonuniform and membrane currents are complex. We present the basic ideas that would help informed users make the most efficient use of NEURON.) <|cite_end|>, NEST <|cite_start|> (Reference: NEST (NEural Simulation Tool): ) <|cite_end|>, Brian/Brian2 <|cite_start|> (Reference: Brian: A Simulator for Spiking Neural Networks in Python: ) <|cite_end|> <|cite_start|> (Reference: Brian 2, an intuitive and efficient neural simulator: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might not only need descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology. Other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. “Brian” 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high level descriptions of models into efficient low level code tailored to different hardware (e.g. CPU or GPU). We illustrate it with several challenging examples: a plastic model of the pyloric network of crustaceans, a closed-loop sensorimotor model, programmatic exploration of a neuron model, and an auditory model with real-time input from a microphone.) <|cite_end|> and PyNN <|cite_start|> (Reference: Neuroinformatics Original Research Article Pynn: a Common Interface for Neuronal Network Simulators: Computational neuroscience has produced a diversity of software for simulations of networks of spiking neurons, with both negative and positive consequences. On the one hand, each simulator uses its own programming or confi guration language, leading to considerable diffi culty in porting models from one simulator to another. This impedes communication between investigators and makes it harder to reproduce and build on the work of others. 
On the other hand, simulation results can be cross-checked between different simulators, giving greater confi dence in their correctness, and each simulator has different optimizations, so the most appropriate simulator can be chosen for a given modelling task. A common programming interface to multiple simulators would reduce or eliminate the problems of simulator diversity while retaining the benefi ts. PyNN is such an interface, making it possible to write a simulation script once, using the Python programming language, and run it without modifi cation on any supported simulator (currently NEURON, NEST, PCSIM, Brian and the Heidelberg VLSI neuromorphic hardware). PyNN increases the productivity of neuronal network modelling by providing high-level abstraction, by promoting code sharing and reuse, and by providing a foundation for simulator-agnostic analysis, visualization and data-management tools. PyNN increases the reliability of modelling studies by making it much easier to check results on multiple simulators. PyNN is open-source software and is available from http://neuralensemble.org/PyNN. compiler standards and simulators develop. Another is that model source code is often not written with reuse and extension in mind, and so considerable rewriting to modularize the code is necessary. Probably the most important barrier is that code written for one simulator is not compatible with any other simulator. Although many computational models in neuroscience are written from the ground up in a general purpose programming language such as C++ or Fortran, probably the majority use a special purpose simulator that allows models to be expressed in terms of neuroscience-specifi c concepts such as neurons, ion channels, synapses; the simulator takes care of translating these concepts into a system of equations and of numerically solving the equations. A large number of such simulators are available (reviewed in Brette et al., 2007), mostly as open-source software, and each has its own programming language, confi guration syntax and/or graphi-cal interface, which creates considerable diffi culty in translating models from one simulator to another, or even in understanding someone else's code, with obvious negative consequences for communication between investigators, reproducibility …) <|cite_end|>, are designed for simulating brain dynamics models with high fidelity and accuracy. They rely on customized numerical solvers and data structures that are not compatible with automatic differentiation, and hence cannot support training models with standard gradient-based methods. On the other hand, by leveraging the automatic differentiation functionality of deep learning (DL) frameworks like PyTorch <|cite_start|> (Reference: PyTorch: An Imperative Style, High-Performance Deep Learning Library: Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. 
We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.) <|cite_end|> and TensorFlow <|cite_start|> (Reference: TensorFlow: A system for large-scale machine learning: TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with particularly strong support for training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model in contrast to existing systems, and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.) <|cite_end|>, existing BIC libraries, such as snnTorch <|cite_start|> (Reference: Training Spiking Neural Networks Using Lessons From Deep Learning: The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like. This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural neural networks. We also explore the delicate interplay between encoding data as spikes and the learning process; the challenges and solutions of applying gradient-based learning to spiking neural networks (SNNs); the subtle link between temporal backpropagation and spike timing dependent plasticity, and how deep learning might move towards biologically plausible online learning. Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here. The fields of deep learning and spiking neural networks evolve very rapidly. We endeavour to treat this document as a 'dynamic' manuscript that will continue to be updated as the common practices in training SNNs also change. A series of companion interactive tutorials complementary to this paper using our Python package, snnTorch, are also made available. See https://snntorch.readthedocs.io/en/latest/tutorials/index.html .) <|cite_end|>, Norse <|cite_start|> (Reference: Norse - A deep learning library for spiking neural networks: ) <|cite_end|>, and SpikingJelly, provide convenient interfaces for building and training spike neural networks (SNNs). 
These libraries, however, are not designed to capture the unique and important features of brain dynamics, and hence are not suitable for simulating large-scale brain networks with realistic biophysical properties.
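To give a concrete sense of what such biophysical features look like, consider a conductance-based synapse, in which an event-driven gating variable is coupled to the postsynaptic membrane potential; the first-order kinetic scheme below is a standard textbook illustration rather than a model prescribed by any particular simulator or library:
\[
\frac{dg}{dt} = -\frac{g}{\tau_{\mathrm{syn}}} + \bar{g}\sum_{k}\delta(t - t_{k}),
\qquad
C\,\frac{dV}{dt} = -g_{L}\,(V - E_{L}) - g\,(V - E_{\mathrm{syn}}) + I_{\mathrm{ext}},
\]
where presynaptic spikes at times $t_{k}$ increment the synaptic conductance $g$ by $\bar{g}$, $g$ decays with time constant $\tau_{\mathrm{syn}}$, and the resulting current depends on the reversal potential $E_{\mathrm{syn}}$. Updating such state variables in an event-driven fashion across millions of sparse connections is the kind of workload that dense-tensor DL kernels are not designed for.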
In this paper, we propose BrainPy as an innovative solution to bridge this gap. Unlike traditional brain simulators, BrainPy leverages the power of JAX <|cite_start|> (Reference: {Compiling Machine Learning Programs via High-Level Tracing: We describe JAX, a domain-specific tracing JIT compiler for generating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the system is fully compatible with Autograd, it allows forward- and reverse-mode automatic differentiation of Python functions to arbitrary order. Because JAX supports structured control flow, it can generate code for sophisticated machine learning algorithms while maintaining high performance. We show that by combining JAX with Autograd and Numpy we get an easily programmable and highly performant ML system that targets CPUs, GPUs, and TPUs, capable of scaling to multi-core Cloud TPUs.) <|cite_end|>, allowing seamless integration with AI models. However, BrainPy goes beyond integration and introduces dedicated optimizations that unleash the full potential of a flexible, efficient, and scalable brain simulator within the JAX ecosystem. To capture the sparse and event-driven nature of brain computation, BrainPy provides a wide range of customized primitive operators. For enhanced flexibility in model construction across various brain organization scales, BrainPy offers a modular and composable interface. To handle the complexity of synaptic computations, BrainPy introduces a novel abstraction for executing diverse synaptic projections. Additionally, to tackle the memory-intensive demands of brain dynamics, BrainPy employs an object-oriented just-in-time (JIT) compilation approach. Leveraging the automatic differentiation capabilities of JAX, BrainPy represents a unique differentiable brain simulator that bridges the gap between the brain simulation and BIC fields. We demonstrate the efficiency and scalability of BrainPy on several brain simulation and BIC tasks and showcase its ability to train biologically plausible spiking models with differentiability.
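As a toy illustration of what differentiable spiking simulation entails, the generic JAX sketch below makes a small leaky integrate-and-fire (LIF) layer trainable by assigning a surrogate gradient to the non-differentiable spike threshold through a custom VJP. All names, constants, and the fast-sigmoid surrogate are our own illustrative choices; the snippet deliberately uses plain JAX rather than BrainPy's actual API, operators, or training utilities.
\begin{verbatim}
import jax
import jax.numpy as jnp

@jax.custom_vjp
def spike(x):
    """Heaviside spike nonlinearity: emits 1 when the membrane crosses threshold."""
    return (x > 0.0).astype(jnp.float32)

def spike_fwd(x):
    return spike(x), x

def spike_bwd(x, g):
    # Surrogate gradient (derivative of a fast sigmoid), smooth around the threshold.
    return (g / (1.0 + 10.0 * jnp.abs(x)) ** 2,)

spike.defvjp(spike_fwd, spike_bwd)

def lif_step(v, current, tau=10.0, v_th=1.0, dt=1.0):
    """One Euler step of a leaky integrate-and-fire neuron with subtractive reset."""
    v = v + dt / tau * (-v + current)
    s = spike(v - v_th)
    return v - s * v_th, s

def spike_count(weights, x_seq):
    """Unroll an input -> LIF layer over time and return per-neuron spike counts."""
    def step(v, x):
        return lif_step(v, x @ weights)
    _, spikes = jax.lax.scan(step, jnp.zeros(weights.shape[1]), x_seq)
    return spikes.sum(axis=0)

def loss(weights, x_seq, target_rate):
    return jnp.mean((spike_count(weights, x_seq) - target_rate) ** 2)

key = jax.random.PRNGKey(0)
weights = 0.1 * jax.random.normal(key, (5, 3))   # 5 inputs projecting onto 3 LIF neurons
x_seq = jax.random.uniform(key, (100, 5))        # 100 time steps of input
grads = jax.grad(loss)(weights, x_seq, 10.0)     # gradients flow through the spikes
\end{verbatim}
The same pattern, combined with sparse and event-driven primitives, is the kind of machinery that gradient-based training of biologically grounded spiking networks relies on.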
Related Work
\label{related_work}
\vspace{-0.1 em}
\textbf{Brain Simulators.} Different brain simulators normally have different focuses. NEURON <|cite_start|> (Reference: {The NEURON simulation environment: The moment-to-moment processing of information by the nervous system involves the propagation and interaction of electrical and chemical signals that are distributed in space and time. Biologically realistic modeling is needed to test hypotheses about the mechanisms that govern these signals and how nervous system function emerges from the operation of these mechanisms. The NEURON simulation program provides a powerful and flexible environment for implementing such models of individual neurons and small networks of neurons. It is particularly useful when membrane potential is nonuniform and membrane currents are complex. We present the basic ideas that would help informed users make the most efficient use of NEURON.) <|cite_end|> allows users to define detailed biophysical models of neurons and synapses, with complex morphology and ion channels. NEST <|cite_start|> (Reference: NEST (NEural Simulation Tool): ) <|cite_end|> focuses on large-scale network models of point neurons and synapses, with simplified dynamics and connectivity patterns. Brian2 <|cite_start|> (Reference: Equation-oriented specification of neural models for simulations: Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator.) <|cite_end|> <|cite_start|> (Reference: Brian 2, an intuitive and efficient neural simulator: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might not only need descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology. 
Other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. “Brian” 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high level descriptions of models into efficient low level code tailored to different hardware (e.g. CPU or GPU). We illustrate it with several challenging examples: a plastic model of the pyloric network of crustaceans, a closed-loop sensorimotor model, programmatic exploration of a neuron model, and an auditory model with real-time input from a microphone.) <|cite_end|> targets being flexible and intuitive, allowing users to easily define dynamical models, environment interactions, and experimental protocols. Currently, the dominant programming approach in brain simulation is descriptive language <|cite_start|> (Reference: Code Generation in Computational Neuroscience: A Review of Tools and Techniques: Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other level of detail, the ever growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms, operating system variants or even written in different languages and thus cannot easily be combined or even compared. Two main approaches to addressing this issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high level descriptions into efficient low level code to combine the best of previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.) 
<|cite_end|>, by which users can use text <|cite_start|> (Reference: Brian 2, an intuitive and efficient neural simulator: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might not only need descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology. Other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. “Brian” 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high level descriptions of models into efficient low level code tailored to different hardware (e.g. CPU or GPU). We illustrate it with several challenging examples: a plastic model of the pyloric network of crustaceans, a closed-loop sensorimotor model, programmatic exploration of a neuron model, and an auditory model with real-time input from a microphone.) <|cite_end|> <|cite_start|> (Reference: Annarchy: a code generation approach to neural simulations on parallel hardware: Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The interface in Python has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into an efficient C++code. We compare the parallel performance of the simulator to existing solutions.) <|cite_end|>, JSON <|cite_start|> (Reference: Brain Modeling ToolKit: An open source software suite for multiscale modeling of brain circuits: Experimental studies in neuroscience are producing data at a rapidly increasing rate, providing exciting opportunities and formidable challenges to existing theoretical and modeling approaches. To turn massive datasets into predictive quantitative frameworks, the field needs software solutions for systematic integration of data into realistic, multiscale models. 
Here we describe the Brain Modeling ToolKit (BMTK), a software suite for building models and performing simulations at multiple levels of resolution, from biophysically detailed multi-compartmental, to point-neuron, to population-statistical approaches. Leveraging the SONATA file format and existing software such as NEURON, NEST, and others, BMTK offers consistent user experience across multiple levels of resolution. It permits highly sophisticated simulations to be set up with little coding required, thus lowering entry barriers to new users. We illustrate successful applications of BMTK to large-scale simulations of a cortical area. BMTK is an open-source package provided as a resource supporting modeling-based discovery in the community.) <|cite_end|> <|cite_start|> (Reference: Netpyne, a tool for data-driven multiscale modeling of brain circuits: Biophysical modeling of neuronal networks helps to integrate and interpret rapidly growing and disparate experimental datasets at multiple scales. The NetPyNE tool (www.netpyne.org) provides both programmatic and graphical interfaces to develop data-driven multiscale network models in NEURON. NetPyNE clearly separates model parameters from implementation code. Users provide specifications at a high level via a standardized declarative language, for example connectivity rules, to create millions of cell-to-cell connections. NetPyNE then enables users to generate the NEURON network, run efficiently parallelized simulations, optimize and explore network parameters through automated batch runs, and use built-in functions for visualization and analysis – connectivity matrices, voltage traces, spike raster plots, local field potentials, and information theoretic measures. NetPyNE also facilitates model sharing by exporting and importing standardized formats (NeuroML and SONATA). NetPyNE is already being used to teach computational neuroscience students and by modelers to investigate brain regions and phenomena.) <|cite_end|>, or XML <|cite_start|> (Reference: NeuroML: A Language for Describing Data Driven Models of Neurons and Networks with a High Degree of Biological Detail: Biologically detailed single neuron and network models are important for understanding how ion channels, synapses and anatomical connectivity underlie the complex electrical behavior of the brain. While neuronal simulators such as NEURON, GENESIS, MOOSE, NEST, and PSICS facilitate the development of these data-driven neuronal models, the specialized languages they employ are generally not interoperable, limiting model accessibility and preventing reuse of model components and cross-simulator validation. To overcome these problems we have used an Open Source software approach to develop NeuroML, a neuronal model description language based on XML (Extensible Markup Language). This enables these detailed models and their components to be defined in a standalone form, allowing them to be used across multiple simulators and archived in a standardized format. Here we describe the structure of NeuroML and demonstrate its scope by converting into NeuroML models of a number of different voltage- and ligand-gated conductances, models of electrical coupling, synaptic transmission and short-term plasticity, together with morphologically detailed models of individual neurons. We have also used these NeuroML-based components to develop an highly detailed cortical network model. 
NeuroML-based model descriptions were validated by demonstrating similar model behavior across five independently developed simulators. Although our results confirm that simulations run on different simulators converge, they reveal limits to model interoperability, by showing that for some models convergence only occurs at high levels of spatial and temporal discretisation, when the computational overhead is high. Our development of NeuroML as a common description language for biophysically detailed neuronal and network models enables interoperability across multiple simulation environments, thereby improving model transparency, accessibility and reuse in computational neuroscience.) <|cite_end|> files to describe the model, which is then translated into highly efficient C++ or CUDA code. The main advantage of this approach is the clear decoupling of the mathematical description from its implementation details. However, this advantage comes at a substantial cost: the custom language lacks the flexibility and generality to define new models not covered by its predefined constructs and functions, it is difficult to integrate and interface with tools and frameworks that do not use the same format, and its unfamiliar syntax imposes a high learning cost. These limitations prevent the application of existing brain simulators to BIC models.
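To make the equation-oriented descriptive style concrete, the following minimal sketch (written in the spirit of Brian2's documented interface; exact option names may vary across versions) defines a small leaky integrate-and-fire population purely from a textual equation, which the simulator then compiles into efficient low-level code:
\begin{verbatim}
# Minimal equation-oriented model description (Brian2-style sketch).
from brian2 import NeuronGroup, StateMonitor, run, ms

tau = 10*ms                        # membrane time constant
eqs = 'dv/dt = (1 - v) / tau : 1'  # textual description of the dynamics
G = NeuronGroup(5, eqs, threshold='v > 0.8', reset='v = 0', method='exact')
M = StateMonitor(G, 'v', record=True)  # record the membrane variable v
run(50*ms)   # the simulator generates and executes low-level code
\end{verbatim}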
\textbf{BIC Libraries.} SNNs are the current dominating models in BIC for their advantages in biological interoperability and energy efficiency.
A number of programming libraries have been developed for SNNs, such as NengoDL <|cite_start|> (Reference: NengoDL: Combining deep learning and neuromorphic modelling methods: NengoDL is a software framework designed to combine the strengths of neuromorphic modelling and deep learning. NengoDL allows users to construct biologically detailed neural models, intermix those models with deep learning elements (such as convolutional networks), and then efficiently simulate those models in an easy-to-use, unified framework. In addition, NengoDL allows users to apply deep learning training methods to optimize the parameters of biological neural models. In this paper we present basic usage examples, benchmarking, and details on the key implementation elements of NengoDL. More details can be found at https://www.nengo.ai/nengo-dl .) <|cite_end|>, BindsNet <|cite_start|> (Reference: BindsNET: A machine learning-oriented spiking neural networks library in Python: The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared towards machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on top of the PyTorch deep neural networks library, enabling fast CPU and GPU computation for large spiking networks. The BindsNET framework can be adjusted to meet the needs of other existing computing and hardware environments, e.g., TensorFlow. We also provide an interface into the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning problems. We argue that this package facilitates the use of spiking networks for large-scale machine learning experimentation, and show some simple examples of how we envision BindsNET can be used in practice. BindsNET code is available at https://github.com/Hananel-Hazan/bindsnet) <|cite_end|>, snnTorch <|cite_start|> (Reference: Training Spiking Neural Networks Using Lessons From Deep Learning: The brain is the perfect place to look for inspiration to develop more efficient neural networks. The inner workings of our synapses and neurons provide a glimpse at what the future of deep learning might look like. This paper serves as a tutorial and perspective showing how to apply the lessons learnt from several decades of research in deep learning, gradient descent, backpropagation and neuroscience to biologically plausible spiking neural neural networks. We also explore the delicate interplay between encoding data as spikes and the learning process; the challenges and solutions of applying gradient-based learning to spiking neural networks (SNNs); the subtle link between temporal backpropagation and spike timing dependent plasticity, and how deep learning might move towards biologically plausible online learning. Some ideas are well accepted and commonly used amongst the neuromorphic engineering community, while others are presented or justified for the first time here. 
The fields of deep learning and spiking neural networks evolve very rapidly. We endeavour to treat this document as a 'dynamic' manuscript that will continue to be updated as the common practices in training SNNs also change. A series of companion interactive tutorials complementary to this paper using our Python package, snnTorch, are also made available. See https://snntorch.readthedocs.io/en/latest/tutorials/index.html .) <|cite_end|>, Norse <|cite_start|> (Reference: Norse - A deep learning library for spiking neural networks: ) <|cite_end|>, SpikingJelly, and BrainCog <|cite_start|> (Reference: BrainCog: A Spiking Neural Network based Brain-inspired Cognitive Intelligence Engine for Brain-inspired AI and Brain Simulation: Spiking neural networks (SNNs) have attracted extensive attentions in Brain-inspired Artificial Intelligence and computational neuroscience. They can be used to simulate biological information processing in the brain at multiple scales. More importantly, SNNs serve as an appropriate level of abstraction to bring inspirations from brain and cognition to Artificial Intelligence. In this paper, we present the Brain-inspired Cognitive Intelligence Engine (BrainCog) for creating brain-inspired AI and brain simulation models. BrainCog incorporates different types of spiking neuron models, learning rules, brain areas, etc., as essential modules provided by the platform. Based on these easy-to-use modules, BrainCog supports various brain-inspired cognitive functions, including Perception and Learning, Decision Making, Knowledge Representation and Reasoning, Motor Control, and Social Cognition. These brain-inspired AI models have been effectively validated on various supervised, unsupervised, and reinforcement learning tasks, and they can be used to enable AI models to be with multiple brain-inspired cognitive functions. For brain simulation, BrainCog realizes the function simulation of decision-making, working memory, the structure simulation of the Neural Circuit, and whole brain structure simulation of Mouse brain, Macaque brain, and Human brain. An AI engine named BORN is developed based on BrainCog, and it demonstrates how the components of BrainCog can be integrated and used to build AI models and applications. To enable the scientific quest to decode the nature of biological intelligence and create AI, BrainCog aims to provide essential and easy-to-use building blocks, and infrastructural support to develop brain-inspired spiking neural network based AI, and to simulate the cognitive brains at multiple scales. The online repository of BrainCog can be found at https://github.com/braincog-x.) <|cite_end|>. These libraries utilize DL frameworks, such as PyTorch <|cite_start|> (Reference: PyTorch: An Imperative Style, High-Performance Deep Learning Library: Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other popular scientific computing libraries, while remaining efficient and supporting hardware accelerators such as GPUs. In this paper, we detail the principles that drove the implementation of PyTorch and how they are reflected in its architecture. We emphasize that every aspect of PyTorch is a regular Python program under the full control of its user. 
We also explain how the careful and pragmatic implementation of the key components of its runtime enables them to work together to achieve compelling performance. We demonstrate the efficiency of individual subsystems, as well as the overall speed of PyTorch on several common benchmarks.) <|cite_end|> and TensorFlow <|cite_start|> (Reference: TensorFlow: A system for large-scale machine learning: TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous "parameter server" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with particularly strong support for training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model in contrast to existing systems, and demonstrate the compelling performance that TensorFlow achieves for several real-world applications.) <|cite_end|>, to enable training SNNs on tasks that traditional brain simulators cannot handle. So far, BIC libraries have mainly focused on combining spiking neurons with DL models, e.g., spiking convolutional neural networks and spiking recurrent neural networks. However, these libraries fall short of high-fidelity brain simulation. First, DL frameworks lack the dedicated components for the sparse, event-driven, and scalable computation required by brain dynamics models. Second, BIC libraries designed for machine learning tasks often lack the capabilities needed to support realistic neuronal and synaptic simulations based on experimental data. The brain encompasses intricate biochemical and biophysical processes that span vast scales in both space and time, and without dedicated optimizations, current BIC libraries face significant challenges in accurately modeling such complex biophysical characteristics. <|paper_end|>
"<|reference_start|> Neuronal dynamics: From single neurons to networks and\nmodels of cognition: What happens in our brain when we make a decision? What triggers a neuron to send out a signal? What is the neural code? This textbook for advanced undergraduate and beginning graduate students provides a thorough and up-to-date introduction to the fields of computational and theoretical neuroscience. It covers classical topics, including the Hodgkin-Huxley equations and Hopfield model, as well as modern developments in the field such as Generalized Linear Models and decision theory. Concepts are introduced using clear step-by-step explanations suitable for readers with only a basic knowledge of differential equations and probabilities, and are richly illustrated by figures and worked-out examples. End-of-chapter summaries and classroom-tested exercises make the book ideal for courses or for self-study. The authors also give pointers to the literature and an extensive bibliography, which will prove invaluable to readers interested in further study. <|reference_end|>",
"<|reference_start|> {Compiling Machine Learning Programs via High-Level Tracing: We describe JAX, a domain-specific tracing JIT compiler for gen-erating high-performance accelerator code from pure Python and Numpy machine learning programs. JAX uses the XLA compiler infrastructure to generate optimized code for the program subroutines that are most favorable for acceleration, and these optimized subroutines can be called and orchestrated by arbitrary Python. Because the system is fully compatible with Autograd, it allows forward- and reverse-mode automatic differentiation of Python functions to arbitrary order. Because JAX supports structured control flow, it can generate code for sophisticated machine learning algorithms while maintaining high performance. We show that by combining JAX with Autograd and Numpy we get an easily pro-grammable and highly performant ML system that targets CPUs, GPUs, and TPUs, capable of scaling to multi-core Cloud TPUs. <|reference_end|>",
"<|reference_start|> BindsNET: A machine learning-oriented spiking neural networks library in Python: The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared towards machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on top of the PyTorch deep neural networks library, enabling fast CPU and GPU computation for large spiking networks. The BindsNET framework can be adjusted to meet the needs of other existing computing and hardware environments, e.g., TensorFlow. We also provide an interface into the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning problems. We argue that this package facilitates the use of spiking networks for large-scale machine learning experimentation, and show some simple examples of how we envision BindsNET can be used in practice. BindsNET code is available at https://github.com/Hananel-Hazan/bindsnet <|reference_end|>",
"<|reference_start|> TensorFlow: A system for large-scale machine learning: TensorFlow is a machine learning system that operates at large scale and in heterogeneous environments. TensorFlow uses dataflow graphs to represent computation, shared state, and the operations that mutate that state. It maps the nodes of a dataflow graph across many machines in a cluster, and within a machine across multiple computational devices, including multicore CPUs, general-purpose GPUs, and custom designed ASICs known as Tensor Processing Units (TPUs). This architecture gives flexibility to the application developer: whereas in previous \"parameter server\" designs the management of shared state is built into the system, TensorFlow enables developers to experiment with novel optimizations and training algorithms. TensorFlow supports a variety of applications, with particularly strong support for training and inference on deep neural networks. Several Google services use TensorFlow in production, we have released it as an open-source project, and it has become widely used for machine learning research. In this paper, we describe the TensorFlow dataflow model in contrast to existing systems, and demonstrate the compelling performance that TensorFlow achieves for several real-world applications. <|reference_end|>"
] | [
0,
11,
23,
28
] | {"<|cite_1|>": "ss-1114179", "<|cite_2|>": "ss-1366997", "<|cite_3|>": "ss-847037", "<|cite_4|>": "ss-1359215", "<|multi_cite_5_1|>": "ss-817594", "<|multi_cite_5_2|>": "ss-1302493", "<|cite_6|>": "ss-796891", "<|cite_7|>": "arxiv-237639", "<|cite_8|>": "arxiv-98825", "<|cite_9|>": "arxiv-369664", "<|cite_10|>": "ss-890803", "<|cite_12|>": "ss-2420589", "<|cite_13|>": "ss-847037", "<|cite_14|>": "ss-1359215", "<|multi_cite_15_1|>": "ss-2329025", "<|multi_cite_15_2|>": "ss-1302493", "<|cite_16|>": "ss-847042", "<|multi_cite_17_1|>": "ss-1302493", "<|multi_cite_17_2|>": "ss-2329026", "<|multi_cite_18_1|>": "ss-1527487", "<|multi_cite_18_2|>": "ss-699344", "<|cite_19|>": "ss-1563518", "<|cite_20|>": "arxiv-160348", "<|cite_21|>": "arxiv-161258", "<|cite_22|>": "arxiv-369664", "<|cite_23|>": "ss-890803", "<|cite_25|>": "arxiv-434579", "<|cite_26|>": "arxiv-237639", "<|cite_27|>": "arxiv-98825"} |
2405.18915 | <|paper_start|> Title: Towards Faithful Chain-of-Thought: Large Language Models are Bridging Reasoners
Abstract: Towards Faithful Chain-of-Thought: Large Language Models are Bridging Reasoners: Large language models (LLMs) suffer from serious unfaithful chain-of-thought (CoT) issues. Previous work attempts to measure and explain this problem, but lacks in-depth analysis within CoTs and does not consider the interactions among all reasoning components jointly. In this paper, we first study the CoT faithfulness issue at the granularity of CoT steps, identify two reasoning paradigms (centralized reasoning and distributed reasoning), and examine their relationship with faithfulness. Subsequently, we conduct a joint analysis of the causal relevance among the context, CoT, and answer during reasoning. The results show that, when predicting answers, the LLM can recall from the context correct information that is missing from the CoT, which leads to unfaithfulness issues. Finally, we propose the inferential bridging method to mitigate this issue, in which we use an attribution method to recall information as hints for CoT generation and filter out noisy CoTs based on their semantic consistency and attribution scores. Extensive experiments demonstrate that our approach effectively alleviates the unfaithful CoT problem.
Introduction
Reasoning is one of the key capabilities that large language models (LLMs) must develop on the path towards artificial general intelligence <|cite_start|> (Reference: Llama 2: Open Foundation and Fine-Tuned Chat Models: In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.) <|cite_end|> <|cite_start|> (Reference: Mistral 7B: We introduce Mistral 7B v0.1, a 7-billion-parameter language model engineered for superior performance and efficiency. Mistral 7B outperforms Llama 2 13B across all evaluated benchmarks, and Llama 1 34B in reasoning, mathematics, and code generation. Our model leverages grouped-query attention (GQA) for faster inference, coupled with sliding window attention (SWA) to effectively handle sequences of arbitrary length with a reduced inference cost. We also provide a model fine-tuned to follow instructions, Mistral 7B -- Instruct, that surpasses the Llama 2 13B -- Chat model both on human and automated benchmarks. Our models are released under the Apache 2.0 license.) <|cite_end|> <|cite_start|> (Reference: GPT-4: Large language models (LLMs) have demonstrated remarkable capabilities in understanding and generating natural language across various domains, including medicine. The article presents an evaluation of GPT-4 from two perspectives on the application of this language model: that of developers from OpenAI and Microsoft, and that of medical users from two European projects. Over the past few years, LLMs trained on massive interdisciplinary corpora have become powerful building blocks for creating systems aimed at specific tasks. The article considers three tasks: medical education, the performance of ChatGPT-4 in the clinic (consultations, transcripts of doctor-patient conversations), and specific levels of diagnostic accuracy (in different areas of medicine). The answer to the question posed about the need for a medical GPT is positive.) <|cite_end|>. Recently, with the application of chain-of-thought (CoT)-like methods, LLMs have exhibited impressive performance across different reasoning tasks <|cite_start|> (Reference: Chain-of-Thought Prompting Elicits Reasoning in Large Language Models: We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking.
For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.) <|cite_end|> <|cite_start|> (Reference: Self-Consistency Improves Chain of Thought Reasoning in Language Models: Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks. In this paper, we propose a new decoding strategy, self-consistency, to replace the naive greedy decoding used in chain-of-thought prompting. It first samples a diverse set of reasoning paths instead of only taking the greedy one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths. Self-consistency leverages the intuition that a complex reasoning problem typically admits multiple different ways of thinking leading to its unique correct answer. Our extensive empirical evaluation shows that self-consistency boosts the performance of chain-of-thought prompting with a striking margin on a range of popular arithmetic and commonsense reasoning benchmarks, including GSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%) and ARC-challenge (+3.9%).) <|cite_end|> <|cite_start|> (Reference: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models: Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks which requires solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal that least-to-most prompting is capable of generalizing to more difficult problems than those seen in the prompts. A notable finding is that when the GPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve the compositional generalization benchmark SCAN in any split (including length split) with an accuracy of at least 99% using just 14 exemplars, compared to only 16% accuracy with chain-of-thought prompting. This is particularly noteworthy because neural-symbolic models in the literature that specialize in solving SCAN are trained on the entire training set containing over 15,000 examples. We have included prompts for all the tasks in the Appendix.) <|cite_end|>. However, despite the significant success of the CoT method, its underlying mechanisms still lack a comprehensive explanation. This raises a critical question for us: Does CoT truly reflect the reasoning process of the model? In other words, is the CoT a faithful explanation <|cite_start|> (Reference: Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?: With the growing popularity of deep-learning based NLP models, comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this opinion piece we reflect on the current state of interpretability evaluation research. 
We call for more clearly differentiating between different desired criteria an interpretation should satisfy, and focus on the faithfulness criteria. We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assumptions, providing an explicit form to how faithfulness is “defined” by the community. We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted. Finally, we claim that the current binary definition for faithfulness sets a potentially unrealistic bar for being considered faithful. We call for discarding the binary notion of faithfulness in favor of a more graded one, which we believe will be of greater practical utility.) <|cite_end|>?
To address the above question, a series of studies measuring and interpreting the CoT faithfulness has commenced, focusing primarily on capturing the causal relevance among the question context, CoT, and answer <|cite_start|> (Reference: Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting: Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always "(A)"--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.) <|cite_end|> <|cite_start|> (Reference: On Measuring Faithfulness or Self-consistency of Natural Language Explanations: Large language models (LLMs) can explain their predictions through post-hoc or Chain-of-Thought (CoT) explanations. But an LLM could make up reasonably sounding explanations that are unfaithful to its underlying reasoning. Recent work has designed tests that aim to judge the faithfulness of post-hoc or CoT explanations. In this work we argue that these faithfulness tests do not measure faithfulness to the models' inner workings -- but rather their self-consistency at output level. Our contributions are three-fold: i) We clarify the status of faithfulness tests in view of model explainability, characterising them as self-consistency tests instead. This assessment we underline by ii) constructing a Comparative Consistency Bank for self-consistency tests that for the first time compares existing tests on a common suite of 11 open LLMs and 5 tasks -- including iii) our new self-consistency measure CC-SHAP. CC-SHAP is a fine-grained measure (not a test) of LLM self-consistency. It compares how a model's input contributes to the predicted answer and to generating the explanation. Our fine-grained CC-SHAP metric allows us iii) to compare LLM behaviour when making predictions and to analyse the effect of other consistency tests at a deeper level, which takes us one step further towards measuring faithfulness by bringing us closer to the internals of the model than strictly surface output-oriented tests. Our code is available at \url{https://github.com/Heidelberg-NLP/CC-SHAP}) <|cite_end|>. 
These works can be mainly divided into two lines: Some works use the context intervention method. They add biasing features into the question context <|cite_start|> (Reference: Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting: Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always "(A)"--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.) <|cite_end|> or perform counterfactual edits in it <|cite_start|> (Reference: Faithfulness Tests for Natural Language Explanations: Explanations of neural models aim to reveal a model's decision-making process for its predictions. However, recent work shows that current methods giving explanations such as saliency maps or counterfactuals can be misleading, as they are prone to present reasons that are unfaithful to the model's inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, proving a fundamental tool in the development of faithful NLEs.) <|cite_end|>. If the CoT does not include the reasons for changes in the answer, it is considered unfaithful, due to its lack of causal relevance to the answer. Other works introduce perturbations to the CoT itself and measure the answer changes. By calculating the coincidence rate of the answer <|cite_start|> (Reference: Measuring Faithfulness in Chain-of-Thought Reasoning: Large language models (LLMs) perform better when they produce step-by-step, "Chain-of-Thought" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). 
We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). Models show large variation across tasks in how strongly they condition on the CoT when predicting their answer, sometimes relying heavily on the CoT and other times primarily ignoring it. CoT's performance boost does not seem to come from CoT's added test-time compute alone or from information encoded via the particular phrasing of the CoT. As models become larger and more capable, they produce less faithful reasoning on most tasks we study. Overall, our results suggest that CoT can be faithful if the circumstances such as the model size and task are carefully chosen.) <|cite_end|> or the average treatment effect on the answer <|cite_start|> (Reference: Making Reasoning Matter: Measuring and Improving Faithfulness of Chain-of-Thought Reasoning: Large language models (LLMs) have been shown to perform better when asked to reason step-by-step before answering a question. However, it is unclear to what degree the model's final answer is faithful to the stated reasoning steps. In this paper, we perform a causal mediation analysis on twelve LLMs to examine how intermediate reasoning steps generated by the LLM influence the final outcome and find that LLMs do not reliably use their intermediate reasoning steps when generating an answer. To address this issue, we introduce FRODO, a framework to tailor small-sized LMs to generate correct reasoning steps and robustly reason over these steps. FRODO consists of an inference module that learns to generate correct reasoning steps using an implicit causal reward function and a reasoning module that learns to faithfully reason over these intermediate inferences using a counterfactual and causal preference objective. Our experiments show that FRODO significantly outperforms four competitive baselines. Furthermore, FRODO improves the robustness and generalization ability of the reasoning LM, yielding higher performance on out-of-distribution test sets. Finally, we find that FRODO's rationales are more faithful to its final answer predictions than standard supervised fine-tuning.) <|cite_end|>, they can capture the degree of causal relevance between CoTs and answers, which represents the faithfulness of the CoT.
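Schematically, these intervention-based measures treat faithfulness as the degree to which the predicted answer actually depends on the stated CoT. As an illustrative formulation (not the exact definition used in any single work), one can write
\[
\mathrm{Dependence}(q) \;=\; 1 - \Pr\big[\,\hat{a}(q, \tilde{c}) = \hat{a}(q, c)\,\big],
\]
where $c$ is the original CoT, $\tilde{c}$ a perturbed or truncated version of it, and $\hat{a}(\cdot)$ the model's predicted answer; a low dependence score suggests that the CoT is a post-hoc rationalization rather than a faithful explanation.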
\begin{figure}[htbp]
\centering
\includegraphics[width=0.95\linewidth]{intro.pdf}
\caption{Two main limitations of previous work and the corresponding solutions we provide. Arrows indicate causal relations between components, dashed lines represent weaker causal relations, question marks denote aspects missing from previous work, and c1...cn represent different CoT steps.}
\label{fig:intro}
\end{figure}
Though these works have made great progress, they have two main limitations. Firstly, the granularity of these studies on the CoT is relatively coarse. As shown in L1. of Figure \ref{fig:intro}, they treat the CoT as a whole when capturing its causality with other components (e.g., answers), overlooking that each step in it may play a different role. For example, the causal relevance between step cn and the answer may be strong, but due to interference from other steps, this effect is masked when the CoT is treated as a whole, which hampers our understanding of the model's reasoning process. Secondly, their analysis only considers marginal effects. As shown in L2. of Figure \ref{fig:intro}, when previous works corrupt the CoT to analyze its causal relevance to the answer, they fix the context and omit its effect. However, during the auto-regressive process, changes in the CoT can also affect the answer's attention to the context, making it impossible to isolate the context's effect on the measurement. Hence, to minimize the interference from other variables, the interactions among all three components should be considered jointly.
In this paper, we explore the CoT faithfulness problem with a gradient attribution method, while addressing the above two shortcomings. Firstly, we use integrated gradients (IG) <|cite_start|> (Reference: Axiomatic Attribution for Deep Networks: We study the problem of attributing the prediction of a deep network to its input features, a problem previously studied by several other works. We identify two fundamental axioms---Sensitivity and Implementation Invariance that attribution methods ought to satisfy. We show that they are not satisfied by most known attribution methods, which we consider to be a fundamental weakness of those methods. We use the axioms to guide the design of a new attribution method called Integrated Gradients. Our method requires no modification to the original network and is extremely simple to implement; it just needs a few calls to the standard gradient operator. We apply this method to a couple of image models, a couple of text models and a chemistry model, demonstrating its ability to debug networks, to extract rules from a network, and to enable users to engage with models better.) <|cite_end|> to measure the causal relevance between individual CoT steps and the answer, and identify two CoT reasoning paradigms: \textbf{centralized reasoning} and \textbf{distributed reasoning}. As shown in M1. of Figure \ref{fig:intro}, the former primarily utilizes the last step of the CoT for answering, while the latter draws on information from multiple steps. We find that different modes of information interaction between the context and the CoT give rise to these two paradigms, and we demonstrate that the latter is prone to serious unfaithfulness issues.
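For reference, integrated gradients attributes a scalar model output $F$ (e.g., the probability assigned to the predicted answer tokens) to each input dimension $i$ by accumulating gradients along a straight path from a baseline $x'$ to the actual input $x$:
\[
\mathrm{IG}_i(x) \;=\; (x_i - x'_i)\int_{0}^{1} \frac{\partial F\big(x' + \alpha\,(x - x')\big)}{\partial x_i}\, d\alpha
\;\approx\; \frac{x_i - x'_i}{m} \sum_{k=1}^{m} \frac{\partial F\big(x' + \tfrac{k}{m}(x - x')\big)}{\partial x_i} .
\]
A step-level relevance score can then be obtained by aggregating the token-level attributions over the tokens of a given CoT step; the choice of output $F$, baseline $x'$, and aggregation scheme is an implementation detail.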
Secondly, we further interpret the unfaithfulness issues in distributed reasoning. We compute the causal relevance among the context, CoT, and answer to jointly analyze the information interaction during the LLM's CoT reasoning. Through this analysis, we observe that the CoT sometimes loses key contextual information, but the model recalls this information from the context when answering (see M2. in Figure \ref{fig:intro}). This inconsistency of information leads to unfaithful CoT issues.
Finally, to validate the above findings, we propose the \textbf{inferential bridging} method, which mitigates the issue based on our interpretation of it. In this method, we use the attribution method to recall the correct information from the context and use it as hints to enhance CoT generation, while filtering out noisy CoTs that have low semantic similarity to the question or low attribution scores with respect to the context (an illustrative sketch of this filtering step follows the contribution list). We evaluate our method on various reasoning benchmarks and conduct extensive experiments. The results not only support our findings, but also indicate that our method is effective in addressing the unfaithful CoT issue (an improvement of up to \textbf{8.8\%}). In summary, our key contributions are as follows:
(1) We delve into the CoT process to study faithfulness and find two distinct paradigms (centralized reasoning and distributed reasoning) in CoT reasoning. Through analysis, we interpret them and show that the latter paradigm leads to serious unfaithful CoT issues.
(2) We jointly analyze the causal relevance among contexts, CoTs, and answers to explain the reason behind unfaithfulness issues. Based on experimental results, we demonstrate that the cause is that, when predicting answers, LLMs recall from the context correct information that is missing from the CoT.
(3) We design a new method called inferential bridging, which effectively supplements CoTs with valid information and filters out invalid information.
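To make the filtering step of inferential bridging concrete, the following minimal sketch illustrates the idea described above; it is an illustrative outline rather than the exact implementation, the three helper callables (hint-conditioned CoT sampling, question-CoT semantic similarity, and CoT-context attribution) are assumed to be supplied by the user, and the thresholds shown are placeholders:
\begin{verbatim}
# Illustrative outline of CoT filtering in inferential bridging (not the
# exact implementation). generate_cot, semantic_sim and attribution_score
# are assumed, user-supplied callables; thresholds are placeholders.
def filter_cots(question, context, hints, generate_cot, semantic_sim,
                attribution_score, n_samples=5, sim_thr=0.7, attr_thr=0.2):
    kept = []
    for _ in range(n_samples):
        cot = generate_cot(question, context, hints)    # hint-enhanced CoT sample
        if semantic_sim(cot, question) < sim_thr:       # drop semantically drifting CoTs
            continue
        if attribution_score(cot, context) < attr_thr:  # drop weakly grounded CoTs
            continue
        kept.append(cot)
    return kept
\end{verbatim}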
Related Work
\subsection{Faithfulness in Chain-of-Thought Reasoning}
In the field of model interpretability, faithfulness, defined as ``accurately representing the reasoning process behind the model’s prediction'', is one of the important features for evaluating natural language explanations <|cite_start|> (Reference: Proceedings of the 22nd ACM Conference on Economics and Computation: ) <|cite_end|> <|cite_start|> (Reference: Explaining Explanations: An Overview of Interpretability of Machine Learning: There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems is neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods especially for deep neural networks are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.) <|cite_end|> <|cite_start|> (Reference: Towards Faithfully Interpretable NLP Systems: How Should We Define and Evaluate Faithfulness?: With the growing popularity of deep-learning based NLP models, comes a need for interpretable systems. But what is interpretability, and what constitutes a high-quality interpretation? In this opinion piece we reflect on the current state of interpretability evaluation research. We call for more clearly differentiating between different desired criteria an interpretation should satisfy, and focus on the faithfulness criteria. We survey the literature with respect to faithfulness evaluation, and arrange the current approaches around three assumptions, providing an explicit form to how faithfulness is “defined” by the community. We provide concrete guidelines on how evaluation of interpretation methods should and should not be conducted. Finally, we claim that the current binary definition for faithfulness sets a potentially unrealistic bar for being considered faithful. We call for discarding the binary notion of faithfulness in favor of a more graded one, which we believe will be of greater practical utility.) <|cite_end|>. With the emergence of CoT-like work, there has been increasing focus on measuring and analyzing this characteristic within CoTs <|cite_start|> (Reference: Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting: Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. 
We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always "(A)"--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.) <|cite_end|> <|cite_start|> (Reference: Measuring Faithfulness in Chain-of-Thought Reasoning: Large language models (LLMs) perform better when they produce step-by-step, "Chain-of-Thought" (CoT) reasoning before answering a question, but it is unclear if the stated reasoning is a faithful explanation of the model's actual reasoning (i.e., its process for answering the question). We investigate hypotheses for how CoT reasoning may be unfaithful, by examining how the model predictions change when we intervene on the CoT (e.g., by adding mistakes or paraphrasing it). Models show large variation across tasks in how strongly they condition on the CoT when predicting their answer, sometimes relying heavily on the CoT and other times primarily ignoring it. CoT's performance boost does not seem to come from CoT's added test-time compute alone or from information encoded via the particular phrasing of the CoT. As models become larger and more capable, they produce less faithful reasoning on most tasks we study. Overall, our results suggest that CoT can be faithful if the circumstances such as the model size and task are carefully chosen.) <|cite_end|> <|cite_start|> (Reference: Faithful Chain-of-Thought Reasoning: While Chain-of-Thought (CoT) prompting boosts Language Models' (LM) performance on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (aka. faithfulness). We propose Faithful CoT, a reasoning framework involving two stages: Translation (Natural Language query $\rightarrow$ symbolic reasoning chain) and Problem Solving (reasoning chain $\rightarrow$ answer), using an LM and a deterministic solver respectively. This guarantees that the reasoning chain provides a faithful explanation of the final answer. Aside from interpretability, Faithful CoT also improves empirical performance: it outperforms standard CoT on 9 of 10 benchmarks from 4 diverse domains, with a relative accuracy gain of 6.3% on Math Word Problems (MWP), 3.4% on Planning, 5.5% on Multi-hop Question Answering (QA), and 21.4% on Relational Inference. Furthermore, with GPT-4 and Codex, it sets the new state-of-the-art few-shot performance on 7 datasets (with 95.0+ accuracy on 6 of them), showing a strong synergy between faithfulness and accuracy.) <|cite_end|>. Some studies introduce counterfactual perturbations to questions. 
If the answer changes but the corresponding reason is not reflected in the CoT, the model suffers from unfaithfulness issues <|cite_start|> (Reference: Faithfulness Tests for Natural Language Explanations: Explanations of neural models aim to reveal a model's decision-making process for its predictions. However, recent work shows that current methods giving explanations such as saliency maps or counterfactuals can be misleading, as they are prone to present reasons that are unfaithful to the model's inner workings. This work explores the challenging question of evaluating the faithfulness of natural language explanations (NLEs). To this end, we present two tests. First, we propose a counterfactual input editor for inserting reasons that lead to counterfactual predictions but are not reflected by the NLEs. Second, we reconstruct inputs from the reasons stated in the generated NLEs and check how often they lead to the same predictions. Our tests can evaluate emerging NLE models, proving a fundamental tool in the development of faithful NLEs.) <|cite_end|> <|cite_start|> (Reference: Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting: Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always "(A)"--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods.) <|cite_end|>. Some other works use causal mediation analysis on CoTs and answers, calculating the average treatment effect scores between the two to represent the faithfulness of the CoT <|cite_start|> (Reference: Making Reasoning Matter: Measuring and Improving Faithfulness of Chain-of-Thought Reasoning: Large language models (LLMs) have been shown to perform better when asked to reason step-by-step before answering a question. However, it is unclear to what degree the model's final answer is faithful to the stated reasoning steps. In this paper, we perform a causal mediation analysis on twelve LLMs to examine how intermediate reasoning steps generated by the LLM influence the final outcome and find that LLMs do not reliably use their intermediate reasoning steps when generating an answer.
To address this issue, we introduce FRODO, a framework to tailor small-sized LMs to generate correct reasoning steps and robustly reason over these steps. FRODO consists of an inference module that learns to generate correct reasoning steps using an implicit causal reward function and a reasoning module that learns to faithfully reason over these intermediate inferences using a counterfactual and causal preference objective. Our experiments show that FRODO significantly outperforms four competitive baselines. Furthermore, FRODO improves the robustness and generalization ability of the reasoning LM, yielding higher performance on out-of-distribution test sets. Finally, we find that FRODO's rationales are more faithful to its final answer predictions than standard supervised fine-tuning.) <|cite_end|>. In this work, we utilize gradient attribution methods to measure faithfulness. This approach allows us to delve deeply into the process of CoT generation for analysis, which previous methods struggled to achieve.
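To make the measurement concrete, the following is a minimal sketch of one possible gradient-attribution score, assuming a HuggingFace-style causal LM; the function name, the gradient-times-embedding scoring rule, and the pooling over answer tokens are illustrative assumptions rather than the exact procedure used in the work described here.
\begin{verbatim}
# Illustrative sketch (assumption): score how much each prompt/CoT token
# contributes to the log-probability of the final answer tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def cot_token_attribution(model, tokenizer, prompt_with_cot, answer):
    enc = tokenizer(prompt_with_cot, return_tensors="pt")
    ans = tokenizer(answer, add_special_tokens=False, return_tensors="pt")
    input_ids = torch.cat([enc.input_ids, ans.input_ids], dim=1)
    # Run the model on embeddings so gradients can flow back to the inputs.
    embeds = model.get_input_embeddings()(input_ids).detach().requires_grad_(True)
    logits = model(inputs_embeds=embeds).logits[:, :-1, :]
    targets = input_ids[:, 1:]
    logp = torch.log_softmax(logits, dim=-1).gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Sum the log-probabilities of the answer tokens and backpropagate.
    answer_logp = logp[:, -ans.input_ids.shape[1]:].sum()
    answer_logp.backward()
    # Gradient-times-input saliency per token, restricted to prompt + CoT tokens.
    saliency = (embeds.grad * embeds).norm(dim=-1).squeeze(0)
    return saliency[: enc.input_ids.shape[1]]
\end{verbatim}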
\subsection{Chain-of-Thought-like Prompting}
After the CoT method achieved significant success, many studies have attempted to elicit the model's inherent reasoning capabilities by designing various prompts.
Self-consistency prompting enhances reasoning by generating multiple reasoning paths and selecting the most consistent answer as the final prediction <|cite_start|> (Reference: Self-Consistency Improves Chain of Thought Reasoning in Language Models: Chain-of-thought prompting combined with pre-trained large language models has achieved encouraging results on complex reasoning tasks. In this paper, we propose a new decoding strategy, self-consistency, to replace the naive greedy decoding used in chain-of-thought prompting. It first samples a diverse set of reasoning paths instead of only taking the greedy one, and then selects the most consistent answer by marginalizing out the sampled reasoning paths. Self-consistency leverages the intuition that a complex reasoning problem typically admits multiple different ways of thinking leading to its unique correct answer. Our extensive empirical evaluation shows that self-consistency boosts the performance of chain-of-thought prompting with a striking margin on a range of popular arithmetic and commonsense reasoning benchmarks, including GSM8K (+17.9%), SVAMP (+11.0%), AQuA (+12.2%), StrategyQA (+6.4%) and ARC-challenge (+3.9%).) <|cite_end|>.
Least-to-most prompting structures the problem-solving process from the simplest components to the most complex, guiding models to build solutions incrementally <|cite_start|> (Reference: Least-to-Most Prompting Enables Complex Reasoning in Large Language Models: Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks which requires solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal that least-to-most prompting is capable of generalizing to more difficult problems than those seen in the prompts. A notable finding is that when the GPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve the compositional generalization benchmark SCAN in any split (including length split) with an accuracy of at least 99% using just 14 exemplars, compared to only 16% accuracy with chain-of-thought prompting. This is particularly noteworthy because neural-symbolic models in the literature that specialize in solving SCAN are trained on the entire training set containing over 15,000 examples. We have included prompts for all the tasks in the Appendix.) <|cite_end|>.
Self-refine prompting utilizes refinement techniques where a model revisits and refines its previous reasoning paths iteratively <|cite_start|> (Reference: Self-Refine: Iterative Refinement with Self-Feedback: Like humans, large language models (LLMs) do not always generate the best output on their first try. Motivated by how humans refine their written text, we introduce Self-Refine, an approach for improving initial outputs from LLMs through iterative feedback and refinement. The main idea is to generate an initial output using an LLMs; then, the same LLMs provides feedback for its output and uses it to refine itself, iteratively. Self-Refine does not require any supervised training data, additional training, or reinforcement learning, and instead uses a single LLM as the generator, refiner, and feedback provider. We evaluate Self-Refine across 7 diverse tasks, ranging from dialog response generation to mathematical reasoning, using state-of-the-art (GPT-3.5, ChatGPT, and GPT-4) LLMs. Across all evaluated tasks, outputs generated with Self-Refine are preferred by humans and automatic metrics over those generated with the same LLM using conventional one-step generation, improving by ~20% absolute on average in task performance. Our work demonstrates that even state-of-the-art LLMs like GPT-4 can be further improved at test time using our simple, standalone approach.) <|cite_end|>.
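As a concrete illustration of the self-consistency idea described above, the sketch below samples several reasoning chains and majority-votes the extracted answers; the sampling callable, the answer-extraction pattern, and the temperature value are illustrative assumptions, not the cited papers' exact setup.
\begin{verbatim}
# Illustrative sketch (assumption): self-consistency over sampled CoT chains.
import re
from collections import Counter

def self_consistency(generate, question, k=10, temperature=0.7):
    """`generate` is assumed to be any callable that returns one sampled
    chain-of-thought completion (a string) for the given prompt."""
    answers = []
    for _ in range(k):
        chain = generate(f"{question}\nLet's think step by step.",
                         temperature=temperature)
        match = re.search(r"answer is\s*(-?\d+(?:\.\d+)?)", chain, re.IGNORECASE)
        if match:
            answers.append(match.group(1))
    if not answers:
        return None
    # The most frequent final answer across the sampled chains wins.
    return Counter(answers).most_common(1)[0][0]
\end{verbatim}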
Our work proposes a new CoT-like prompting technique called ``inferential bridging''. This approach can enhance the model's focus on the correct information in the context, thereby improving the faithfulness of the CoT, an aspect overlooked in previous works. <|paper_end|> | [
"<|reference_start|> {{GPT-4: Большие языковые модели (LLM) продемонстрировали замечательные возможности в понимании и генерации естественного языка в различных областях, включая медицину. В статье представлена оценка GPT-4 на основе двух точек зрения на проблему применения этой языковой модели: разработчиков из OpenAI, Microsoft и пользователей-медиков из двух европейских проектов. За последние несколько лет LLM, обученные на массивных междисциплинарных корпусах, стали мощными строительными блоками при создании систем, ориентированных на решение конкретных задач. В статье рассматривается три задачи: медицинское образование, работоспособность ChatGPT-4 в клинике (консультации, записи стенограмм беседы врача и пациента), и конкретные уровни точности диагностики (разные области медицины). Ответ на поставленный вопрос о необходимости медицинского GPT есть в мире, -он положительный. <|reference_end|>",
"<|reference_start|> On Measuring Faithfulness or Self-consistency of Natural Language Explanations: Large language models (LLMs) can explain their predictions through post-hoc or Chain-of-Thought (CoT) explanations. But an LLM could make up reasonably sounding explanations that are unfaithful to its underlying reasoning. Recent work has designed tests that aim to judge the faithfulness of post-hoc or CoT explanations. In this work we argue that these faithfulness tests do not measure faithfulness to the models' inner workings -- but rather their self-consistency at output level. Our contributions are three-fold: i) We clarify the status of faithfulness tests in view of model explainability, characterising them as self-consistency tests instead. This assessment we underline by ii) constructing a Comparative Consistency Bank for self-consistency tests that for the first time compares existing tests on a common suite of 11 open LLMs and 5 tasks -- including iii) our new self-consistency measure CC-SHAP. CC-SHAP is a fine-grained measure (not a test) of LLM self-consistency. It compares how a model's input contributes to the predicted answer and to generating the explanation. Our fine-grained CC-SHAP metric allows us iii) to compare LLM behaviour when making predictions and to analyse the effect of other consistency tests at a deeper level, which takes us one step further towards measuring faithfulness by bringing us closer to the internals of the model than strictly surface output-oriented tests. Our code is available at \\url{https://github.com/Heidelberg-NLP/CC-SHAP} <|reference_end|>",
"<|reference_start|> Language Models Don't Always Say What They Think: Unfaithful Explanations in Chain-of-Thought Prompting: Large Language Models (LLMs) can achieve strong performance on many tasks by producing step-by-step reasoning before giving a final output, often referred to as chain-of-thought reasoning (CoT). It is tempting to interpret these CoT explanations as the LLM's process for solving a task. This level of transparency into LLMs' predictions would yield significant safety benefits. However, we find that CoT explanations can systematically misrepresent the true reason for a model's prediction. We demonstrate that CoT explanations can be heavily influenced by adding biasing features to model inputs--e.g., by reordering the multiple-choice options in a few-shot prompt to make the answer always \"(A)\"--which models systematically fail to mention in their explanations. When we bias models toward incorrect answers, they frequently generate CoT explanations rationalizing those answers. This causes accuracy to drop by as much as 36% on a suite of 13 tasks from BIG-Bench Hard, when testing with GPT-3.5 from OpenAI and Claude 1.0 from Anthropic. On a social-bias task, model explanations justify giving answers in line with stereotypes without mentioning the influence of these social biases. Our findings indicate that CoT explanations can be plausible yet misleading, which risks increasing our trust in LLMs without guaranteeing their safety. Building more transparent and explainable systems will require either improving CoT faithfulness through targeted efforts or abandoning CoT in favor of alternative methods. <|reference_end|>",
"<|reference_start|> Faithful Chain-of-Thought Reasoning: While Chain-of-Thought (CoT) prompting boosts Language Models' (LM) performance on a gamut of complex reasoning tasks, the generated reasoning chain does not necessarily reflect how the model arrives at the answer (aka. faithfulness). We propose Faithful CoT, a reasoning framework involving two stages: Translation (Natural Language query $\\rightarrow$ symbolic reasoning chain) and Problem Solving (reasoning chain $\\rightarrow$ answer), using an LM and a deterministic solver respectively. This guarantees that the reasoning chain provides a faithful explanation of the final answer. Aside from interpretability, Faithful CoT also improves empirical performance: it outperforms standard CoT on 9 of 10 benchmarks from 4 diverse domains, with a relative accuracy gain of 6.3% on Math Word Problems (MWP), 3.4% on Planning, 5.5% on Multi-hop Question Answering (QA), and 21.4% on Relational Inference. Furthermore, with GPT-4 and Codex, it sets the new state-of-the-art few-shot performance on 7 datasets (with 95.0+ accuracy on 6 of them), showing a strong synergy between faithfulness and accuracy. <|reference_end|>"
] | [
2,
8,
17,
19
] | {"<|multi_cite_2_1|>": "arxiv-524224", "<|multi_cite_2_2|>": "arxiv-547654", "<|multi_cite_2_3|>": "ss-1343995", "<|multi_cite_1_1|>": "arxiv-395344", "<|multi_cite_1_2|>": "arxiv-407230", "<|multi_cite_1_3|>": "arxiv-421182", "<|cite_3|>": "ss-1352568", "<|multi_cite_4_1|>": "arxiv-502890", "<|multi_cite_4_3|>": "ss-1175658", "<|cite_5|>": "arxiv-502890", "<|cite_6|>": "arxiv-510175", "<|cite_7|>": "arxiv-526208", "<|multi_cite_8_2|>": "arxiv-587921", "<|cite_9|>": "arxiv-118182", "<|multi_cite_10_1|>": "ss-1218456", "<|multi_cite_10_2|>": "arxiv-160826", "<|multi_cite_10_3|>": "ss-1352568", "<|multi_cite_11_1|>": "arxiv-502890", "<|multi_cite_11_2|>": "arxiv-526208", "<|multi_cite_11_3|>": "arxiv-477929", "<|multi_cite_12_1|>": "arxiv-510175", "<|multi_cite_12_2|>": "arxiv-502890", "<|multi_cite_13_2|>": "arxiv-587921", "<|cite_14|>": "arxiv-407230", "<|cite_15|>": "arxiv-421182", "<|cite_16|>": "arxiv-493446"} |
1802.05662 | <|paper_start|> Title: List Heaps
Abstract: List Heaps: This paper presents a simple extension of the binary heap, the List Heap. We use List Heaps to demonstrate the idea of adaptive heaps: heaps whose performance is a function of both the size of the problem instance and the disorder of the problem instance. We focus on the presortedness of the input sequence as a measure of disorder for the problem instance. A number of practical applications that rely on heaps deal with input that is not random. Even random input contains presorted subsequences. Devising heaps that exploit this structure may provide a means for improving practical performance. We present some basic empirical tests to support this claim. Additionally, adaptive heaps may provide an interesting direction for theoretical investigation.
Introduction
\label{introduction}
A heap is a data structure which holds a finite set of items. Each item is associated with a key drawn from a totally ordered set. Heaps support the following operations:
\begin{table}[h]
\begin{tabular}{p{0.3\linewidth} p{0.6\linewidth}}
\textit{make heap (h):} & Create and return a new, empty heap $h$\\
\textit{insert (h, x, k):} & Insert item $x$ with key $k$ into heap $h$ and return a reference to where $x$ is stored in $h$\\
\textit{find min (h):} & Return a reference to where the item with the minimum key is stored in heap $h$\\
\textit{delete min (h):} & Delete the item with the minimum key from heap $h$ and return it\\
\textit{decrease key (h, x, k):} & Decrease the key of item $x$ in heap $h$ to $k$\\
\textit{delete (h, x):} & Delete item $x$ from heap $h$\\
\textit{meld ($h_{1}$, $h_{2}$):} & Return a heap formed by taking the union of heaps $h_{1}$ and $h_{2}$\\
\end{tabular}
\end{table}
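For readers who prefer code to a table, the following is a minimal sketch of this interface backed by Python's array-based binary heap; the lazy-deletion trick used to emulate \textit{decrease key} and \textit{delete}, the $O(n)$ \textit{meld}, and the class and method names are illustrative assumptions, not part of the List Heap design introduced later in the paper.
\begin{verbatim}
# Illustrative sketch (assumption): the heap interface above, emulated with
# Python's heapq (an array-based binary heap) plus lazy deletion.
import heapq
import itertools

class BinaryHeap:
    def __init__(self):                      # make heap
        self._pq, self._entries = [], {}
        self._counter = itertools.count()

    def insert(self, x, k):                  # insert
        entry = [k, next(self._counter), x, True]
        self._entries[x] = entry
        heapq.heappush(self._pq, entry)
        return entry

    def decrease_key(self, x, k):            # decrease key (lazy: re-insert)
        self.delete(x)
        return self.insert(x, k)

    def delete(self, x):                     # delete (lazy: mark stale)
        self._entries.pop(x)[-1] = False

    def find_min(self):                      # find min
        while self._pq and not self._pq[0][-1]:
            heapq.heappop(self._pq)          # discard stale entries
        return self._pq[0] if self._pq else None

    def delete_min(self):                    # delete min
        entry = self.find_min()
        if entry is not None:
            heapq.heappop(self._pq)
            del self._entries[entry[2]]
            return entry[2]

def meld(h1, h2):                            # meld (rebuild; O(n) here)
    h = BinaryHeap()
    for x, (k, _, _, alive) in {**h1._entries, **h2._entries}.items():
        if alive:
            h.insert(x, k)
    return h
\end{verbatim}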
The binary heap was introduced by Williams in 1964. Its simplicity and speed have made it and its generalization, the d-array heap, a popular choice in practice. It supports \textit{insert}, \textit{delete min}, and \textit{decrease key} in $O(\log{n})$ time. It can be used to sort $n$ items in $O(n\log{n})$, which matches the worst-case lower bound for a comparison sort. Vuillemin's introduction of the binomial queue in 1978 <|cite_start|> (Reference: A data structure for manipulating priority queues: A data structure is described which can be used for representing a collection of priority queues. The primitive operations are insertion, deletion, union, update, and search for an item of earliest priority.) <|cite_end|> added \textit{meld} to the list of operations supported in $O(\log{n})$.
In 1984, Fibonacci heaps <|cite_start|> (Reference: Fibonacci Heaps and Their Uses in Improved Network
Optimization Algorithms: In this paper we develop a new data structure for implementing heaps (priority queues). Our structure, Fibonacci heaps (abbreviated F-heaps), extends the binomial queues proposed by Vuillemin and studied further by Brown. F-heaps support arbitrary deletion from an n-item heap in O(log n) amortized time and all other standard heap operations in O(1) amortized time. Using F-heaps we are able to obtain improved running times for several network optimization algorithms. In particular, we obtain the following worst-case bounds, where n is the number of vertices and m the number of edges in the problem graph: O(n log n + m) for the single-source shortest path problem with nonnegative edge lengths, improved from O(m log_{(m/n+2)} n);
O(n^2 log n + nm) for the all-pairs shortest path problem, improved from O(nm log_{(m/n+2)} n);
O(n^2 log n + nm) for the assignment problem (weighted bipartite matching), improved from O(nm log_{(m/n+2)} n);
O(m β(m, n)) for the minimum spanning tree problem, improved from O(m log log_{(m/n+2)} n); where β(m, n) = min {i | log^(i) n ≤ m/n}. Note that β(m, n) ≤ log* n if m ≥ n.
Of these results, the improved bound for minimum spanning trees is the most striking, although all the results give asymptotic improvements for graphs of appropriate densities.) <|cite_end|>, an extension of the binomial queue, achieved $O(1)$ amortized time for \textit{insert}, \textit{decrease key}, and \textit{meld}. The \textit{decrease key} result was particularly important in that it improved the worst-case bounds for a number of well-known graph algorithms. More recently, a few structures have achieved worst-case $O(1)$ time for \textit{decrease key} and \textit{meld}, see <|cite_start|> (Reference: Worst-case efficient priority queues: An implementation of priority queues is presented that supports the operations MAKEQUEUE, FINDMIN, INSERT, MELD and DECREASEKEY in worst case time O(1) and DELETEMIN and DELETE in worst case time O(log n). The space requirement is linear. The data structure presented is the first achieving this worst case performance.) <|cite_end|>. While this work has produced interesting and important theoretical results, it has failed to yield a structure that consistently outperforms the original binary heap and its variants in practice <|cite_start|> (Reference: A Back-to-Basics Empirical Study of Priority Queues: The theory community has proposed several new heap variants in the recent past which have remained largely untested experimentally. We take the field back to the drawing board, with straightforward implementations of both classic and novel structures using only standard, well-known optimizations. We study the behavior of each structure on a variety of inputs, including artificial workloads, workloads generated by running algorithms on real map data, and workloads from a discrete event simulator used in recent systems networking research. We provide observations about which characteristics are most correlated to performance. For example, we find that the L1 cache miss rate appears to be strongly correlated with wallclock time. We also provide observations about how the input sequence affects the relative performance of the different heap variants. For example, we show (both theoretically and in practice) that certain random insertion-deletion sequences are degenerate and can lead to misleading results. Overall, our findings suggest that while the conventional wisdom holds in some cases, it is sorely mistaken in others.) <|cite_end|>.
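As a concrete illustration of why a cheap \textit{decrease key} matters for graph algorithms, the sketch below is a standard textbook Dijkstra implementation; because Python's heapq has no native \textit{decrease key}, it falls back on re-inserting entries and skipping stale ones, which is exactly the kind of overhead the structures cited above aim to avoid. This is a generic sketch, not an implementation from any of the cited papers.
\begin{verbatim}
# Illustrative sketch: Dijkstra's algorithm with a binary heap. Each
# "decrease key" is emulated by pushing a duplicate entry and skipping
# stale ones, inflating the heap to O(m) entries in the worst case.
import heapq

def dijkstra(adj, source):
    """adj: {u: [(v, weight), ...]} with non-negative edge weights."""
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, skip it
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))   # emulated decrease key
    return dist
\end{verbatim}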
In this paper, we return to the binary heap and develop a simple extension, the List Heap. This straightforward extension can be given \textit{adaptive} operations: operations whose performance depends not only on the problem size, but also on the level of presortedness (disorder) in the problem instance. A bit of work has gone into developing the theory of adaptive sorting algorithms, see <|cite_start|> (Reference: A survey of adaptive sorting algorithms: The design and analysis of adaptive sorting algorithms has made important contributions to both theory and practice. The main contributions from the theoretical point of view are: the description of the complexity of a sorting algorithm not only in terms of the size of a problem instance but also in terms of the disorder of the given problem instance; the establishment of new relationships among measures of disorder; the introduction of new sorting algorithms that take advantage of the existing order in the input sequence; and, the proofs that several of the new sorting algorithms achieve maximal (optimal) adaptivity with respect to several measures of disorder. The main contributions from the practical point of view are: the demonstration that several algorithms currently in use are adaptive; and, the development of new algorithms, similar to currently used algorithms that perform competitively on random sequences and are significantly faster on nearly sorted sequences. In this survey, we present the basic notions and concepts of adaptive sorting and the state of the art of adaptive sorting algorithms.) <|cite_end|>, but to our knowledge, this work has not migrated into the related work on heap data structures, <|cite_start|> (Reference: A Survey on Priority Queues: ) <|cite_end|>. We believe that \textit{adaptive heaps} may provide an interesting angle for theoretical investigation. Additionally, they may provide a means of improving the empirical performance of current heap variants. The List Heap is a first step in this direction.
List Heaps support \textit{decrease key}, \textit{insert}, and \textit{delete min} in $O(\log{k})$, where $k$ is the number of lists in the List Heap. As we will show, the number of lists in a List Heap is a function of both the size of the problem instance and the disorder of the problem instance. We returned to the binary heap because of its simplicity and ubiquity, but this was not without costs. List Heaps lose the $O(1)$ \textit{insert}, \textit{decrease key}, and \textit{meld} of more sophisticated structures.
\subsection{Preliminaries} \label{preliminaries}
Here we present notational conventions and definitions used through the remainder of this paper. Let $X=\langle x_{1},...,x_{n}\rangle$ be a sequence of $n$ distinct elements $x_{i}$ from some totally ordered set. If $x_{1}<x_{2}<...<x_{n}$, $X$ is \textit{monotonically increasing} or just \textit{increasing}. If $x_{1}>x_{2}>...>x_{n}$, $X$ is \textit{monotonically decreasing} or just \textit{decreasing}. A sequence is $monotonic$ if it is either increasing or decreasing. The $head$ of a sequence $X$ is $x_{1}$, the $tail$ is $x_{n}$. If $A$ is a set, then $||A||$ is its cardinality. If $X$ is a sequence, then $|X|$ is its length. For two sequences $X=\langle x_{1},...,x_{n}\rangle$ and $Y=\langle y_{1},...,y_{m}\rangle$, their $concatenation$ $XY$ is the sequence $\langle x_{1},...,x_{n},y_{1},...,y_{m}\rangle$. If the sequence $X$ contains no elements, we write $X=\emptyset$.
A sequence obtained by deleting zero or more elements from $X$ is called a $subsequence$ of $X$. A subsequence $Y=\langle x_{i},...,x_{j}\rangle$ of $X$ is $consecutive$ if the indices $i,...,j$ are consecutive integers.
Let $Y = \langle x_{i},...,x_{j} \rangle$ and $Z = \langle x_{k},...,x_{l} \rangle$ be subsequences of $X$. The \textit{intersection} of $Y$ and $Z$, $Y \cap Z$, is the subsequence of $X$ obtained by deleting from $X$ all $x_{h}$ not in both $Y$ and $Z$ for $1 \leq h \leq n$. Similarly, the $union$ of $Y$ and $Z$, $Y \cup Z$, is the subsequence of $X$ obtained by deleting from $X$ all $x_{h}$ not in either $Y$ or $Z$ for $1 \leq h \leq n$. $Y$ and $Z$ are \textit{disjoint} if $Y \cap Z = \emptyset$. Let $P = \{X_{1},...,X_{k}\}$ be a set of disjoint subsequences of $X$, if the union of all subsequences in $P$ equals $X$, then $P$ is a \textit{partition} of $X$.
\subsection{Adaptive Sorting} \label{adaptive_sorting}
This section gives a very brief review of adaptive sorting. Heaps solve a generalized sorting problem, so adaptive sorting provides some intuition for why adaptive heaps might be useful. For a more detailed survey of adaptive sorting, see <|cite_start|> (Reference: A survey of adaptive sorting algorithms: The design and analysis of adaptive sorting algorithms has made important contributions to both theory and practice. The main contributions from the theoretical point of view are: the description of the complexity of a sorting algorithm not only in terms of the size of a problem instance but also in terms of the disorder of the given problem instance; the establishment of new relationships among measures of disorder; the introduction of new sorting algorithms that take advantage of the existing order in the input sequence; and, the proofs that several of the new sorting algorithms achieve maximal (optimal) adaptivity with respect to several measures of disorder. The main contributions from the practical point of view are: the demonstration that several algorithms currently in use are adaptive; and, the development of new algorithms, similar to currently used algorithms that perform competitively on random sequences and are significantly faster on nearly sorted sequences. In this survey, we present the basic notions and concepts of adaptive sorting and the state of the art of adaptive sorting algorithms.) <|cite_end|> or <|cite_start|> (Reference: A Framework for Adaptive Sorting: ) <|cite_end|>.
Consider the sorting problem: take as input some arbitrary sequence $X=\langle x_{1},...,x_{n}\rangle$ of elements from a totally ordered set and return a permutation of the sequence that is in increasing sorted order. Comparison-based sorting has a well-known worst-case lower bound of $\Omega(n\log{n})$ <|cite_start|> (Reference: Introduction to Algorithms, Third Edition: If you had to buy just one text on algorithms, Introduction to Algorithms is a magnificent choice. The book begins by considering the mathematical foundations of the analysis of algorithms and maintains this mathematical rigor throughout the work. The tools developed in these opening sections are then applied to sorting, data structures, graphs, and a variety of selected algorithms including computational geometry, string algorithms, parallel models of computation, fast Fourier transforms (FFTs), and more. This book's strength lies in its encyclopedic range, clear exposition, and powerful analysis. Pseudo-code explanation of the algorithms coupled with proof of their accuracy makes this book a great resource on the basic tools used to analyze the performance of algorithms.) <|cite_end|>. However, it is clear that this lower bound need not hold for every input. What if our input sequence is already sorted? What if only one element is out of place? What if it is the concatenation of two sorted subsequences? The lower bound can be refined if we account for the disorder in the input sequence.
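To make ``disorder'' concrete, the sketch below computes two standard measures of presortedness -- the number of maximal ascending runs and the number of inversions; the function names are ours, and these two measures are only examples from the larger catalogue in the surveys cited in this section.
\begin{verbatim}
# Illustrative sketch: two classic measures of presortedness.
def runs(seq):
    """Number of maximal ascending runs, e.g. runs([3, 5, 9, 2, 4, 1]) == 3."""
    if not seq:
        return 0
    return 1 + sum(1 for a, b in zip(seq, seq[1:]) if b < a)

def inversions(seq):
    """Number of pairs (i, j) with i < j and seq[i] > seq[j] (O(n^2) here)."""
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

# A sorted input has 1 run and 0 inversions; a reversed input of length n
# has n runs and n*(n-1)/2 inversions.
\end{verbatim}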
The main achievements of the adaptive sorting literature are: proposing a variety of measures of disorder, proving new lower bounds with respect to these measures, developing sorting algorithms whose performance matches these new lower bounds, and developing a partial order on the set of measures.
We stop here and again direct the reader to <|cite_start|> (Reference: A survey of adaptive sorting algorithms: The design and analysis of adaptive sorting algorithms has made important contributions to both theory and practice. The main contributions from the theoretical point of view are: the description of the complexity of a sorting algorithm not only in terms of the size of a problem instance but also in terms of the disorder of the given problem instance; the establishment of new relationships among measures of disorder; the introduction of new sorting algorithms that take advantage of the existing order in the input sequence; and, the proofs that several of the new sorting algorithms achieve maximal (optimal) adaptivity with respect to several measures of disorder. The main contributions from the practical point of view are: the demonstration that several algorithms currently in use are adaptive; and, the development of new algorithms, similar to currently used algorithms that perform competitively on random sequences and are significantly faster on nearly sorted sequences. In this survey, we present the basic notions and concepts of adaptive sorting and the state of the art of adaptive sorting algorithms.) <|cite_end|> or <|cite_start|> (Reference: A Framework for Adaptive Sorting: ) <|cite_end|> for more information.
\subsection{Outline of Paper}
The remainder of the paper is organized as follows. Section \ref{Sec:Adaptive Heaps} discusses why adaptive heaps might be worth developing. Section \ref{list_heaps} presents List Heaps - their structure and operations. Section \ref{Empirical Results} presents the results of a series of brief empirical tests suggesting List Heaps may have promise in practice. Section \ref{Conclusion} summarizes results obtained. <|paper_end|> | [
"<|reference_start|> A survey of adaptive sorting algorithms: The design and analysis of adaptive sorting algorithms has made important contributions to both theory and practice. The main contributions from the theoretical point of view are: the description of the complexity of a sorting algorithm not only in terms of the size of a problem instance but also in terms of the disorder of the given problem instance; the establishment of new relationships among measures of disorder; the introduction of new sorting algorithms that take advantage of the existing order in the input sequence; and, the proofs that several of the new sorting algorithms achieve maximal (optimal) adaptivity with respect to several measures of disorder. The main contributions from the practical point of view are: the demonstration that several algorithms currently in use are adaptive; and, the development of new algorithms, similar to currently used algorithms that perform competitively on random sequences and are significantly faster on nearly sorted sequences. In this survey, we present the basic notions and concepts of adaptive sorting and the state of the art of adaptive sorting algorithms. <|reference_end|>",
"<|reference_start|> A Survey on Priority Queues: <|reference_end|>",
"<|reference_start|> A Framework for Adaptive Sorting: <|reference_end|>",
"<|reference_start|> A survey of adaptive sorting algorithms: The design and analysis of adaptive sorting algorithms has made important contributions to both theory and practice. The main contributions from the theoretical point of view are: the description of the complexity of a sorting algorithm not only in terms of the size of a problem instance but also in terms of the disorder of the given problem instance; the establishment of new relationships among measures of disorder; the introduction of new sorting algorithms that take advantage of the existing order in the input sequence; and, the proofs that several of the new sorting algorithms achieve maximal (optimal) adaptivity with respect to several measures of disorder. The main contributions from the practical point of view are: the demonstration that several algorithms currently in use are adaptive; and, the development of new algorithms, similar to currently used algorithms that perform competitively on random sequences and are significantly faster on nearly sorted sequences. In this survey, we present the basic notions and concepts of adaptive sorting and the state of the art of adaptive sorting algorithms. <|reference_end|>"
] | [
4,
5,
7,
9
] | {"<|cite_2|>": "ss-1976367", "<|cite_3|>": "ss-1838151", "<|cite_4|>": "ss-1700273", "<|cite_6|>": "arxiv-57584", "<|cite_7|>": "ss-998878", "<|cite_8|>": "ss-680832", "<|cite_9|>": "ss-998878", "<|cite_10|>": "ss-2552589", "<|cite_11|>": "ss-1287089", "<|cite_12|>": "ss-998878", "<|cite_13|>": "ss-2552589"} |
2306.04083 | <|paper_start|> Title: Coverage Path Planning with Budget Constraints for Multiple Unmanned Ground Vehicles
Abstract: Coverage Path Planning with Budget Constraints for Multiple Unmanned Ground Vehicles: This paper proposes a state-machine model for a multi-modal, multi-robot environmental sensing algorithm. This multi-modal algorithm integrates two different exploration algorithms: (1) coverage path planning using variable formations and (2) collaborative active sensing using multi-robot swarms. The state machine provides the logic for when to switch between these different sensing algorithms. We evaluate the performance of the proposed approach on a gas source localisation and mapping task. We use hardware-in-the-loop experiments and real-time experiments with a radio source simulating a real gas field. We compare the proposed approach with a single-mode, state-of-the-art collaborative active sensing approach. Our results indicate that our multi-modal switching approach can converge more rapidly than single-mode active sensing.
Introduction
Intelligent transportation is gaining popularity with an increase in the number of practical applications <|cite_start|> (Reference: A 3-d multi-object path planning method for electric vehicle considering the energy consumption and distance: The poor cruising range of electric vehicle (EV) is a problem preventing its popularity. To tackle this problem, methods such as battery technology, energy-based motion control technology are developed. This paper proposes a new solution from the perspective of path planning. Such a solution is called 3-D multi-object path planning method (3D-M method), in which both the energy consumption and distance are considered. The 3D-M method mainly realizes multi-object path planning by an energy consumption estimation model (ECEM) and a distance-integrated estimation model (DIEM). The ECEM can estimate the energy consumption between the neighbour position and the destination on the 3-D map, using a novel slope energy model considering energy consumption characteristic of the EV. The DIEM can estimate the integrated distance which includes the corresponding 2-D distance and 3-D distance, respectively. In the planning process, the outputs of ECEM and DIEM are combined to determine the cost of a path. In addition, a chaos-based multi-object optimizer (CBMOO) is used to search the optimal weights for the 3D-M method. The simulation experiments prove that the proposed method can generate an optimal path which saves much energy in comparison with the path provided by the distance-based method.) <|cite_end|>. This is particularly true for autonomous systems, such as unmanned ground vehicles (UGVs). UGVs have different applications, including coverage of a given area. In particular, when a number of UGVs/agents are employed to cover a given area, the control systems need to be intelligent to achieve the mission while overcoming the obstacles, both static as well as dynamic. The coverage path planning problem is a task wherein a UGV or UGVs, possessing a complete geometric description of the area of interest, generates an efficient coverage path to visit every point in a given area while avoiding all possible obstacles <|cite_start|> (Reference: A survey on coverage path planning for robotics: ) <|cite_end|>. Various technological developments and advancements in sensor technology, navigational, communication, and computational systems have facilitated the rapid growth in the use of coverage path planning (CPP) methods to assist UGVs in performing many specific applications, ranging from humanitarian missions such as surveillance, search and rescue tasks, to military operations such as surveillance <|cite_start|> (Reference: Adaptive Neural Network Control and Optimal Path Planning of UAV Surveillance System With Energy Consumption Prediction: A surveillance system is one of the most interesting research topics for an unmanned aerial vehicle (UAV). However, the problem of planning an energy-efficient path for the surveillance purpose while anticipating disturbances and predicting energy consumptions during the path tracking is still a challenging problem in recent years. The optimal path planning and the disturbance rejection control for a UAV surveillance system are investigated in this paper. 
A trained and tested energy consumption regression model is used to be the cost function of an optimal path planning scheme, which is designed from a clustered 3D real pilot flight pattern with the proposed K-agglomerative clustering method, and is processed via A-star and set-based particle-swarm-optimization (S-PSO) algorithm with adaptive weights. Moreover, an online adaptive neural network (ANN) controller with varied learning rates is designed to ensure the control stability while having a reliably fast disturbance rejection response. The effectiveness of the proposed framework is verified by numerical simulations and experimental results. By applying the proposed optimal path planning scheme, the energy consumption of the optimal path is only 72.3397 Wh while the average consumed energy of real pilot flight data is 96.593Wh. In addition, the proposed ANN control improves average root-mean-square error (RMSE) of horizontal and vertical tracking performance by 49.083% and 37.50% in comparison with a proportional-integral-differential (PID) control and a fuzzy control under the occurrence of external disturbances. According to all of the results, the combination of the proposed optimal path planning scheme and ANN controller can achieve an energy-efficient UAV surveillance systems with fast disturbance rejection response.) <|cite_end|>, environmental monitoring <|cite_start|> (Reference: A Distributed Control Framework for a Team of Unmanned Aerial Vehicles for Dynamic Wildfire Tracking: Wildland fire fighting is a very dangerous job, and the lack of information of the fire front is one of main reasons that causes many accidents. Using unmanned aerial vehicle (UAV) to cover wildfire is promising because it can replace human in hazardous fire tracking and save operation costs significantly. In this paper we propose a distributed control framework designed for a team of UAVs that can closely monitor a wildfire in open space, and precisely track its development. The UAV team, designed for flexible deployment, can effectively avoid in-flight collision as well as cooperate well with other neighbors. Experimental results are conducted to demonstrate the capabilites of the UAV team in covering a spreading wildfire.) <|cite_end|>, and civilian applications such as area cleaning, seeding or harvesting <|cite_start|> (Reference: Path Planning of Seeding Robot Based on Improved Ant Colony Algorithm: ) <|cite_end|>, and mapping and model reconstruction <|cite_start|> (Reference: Ground feature oriented path planning for unmanned aerial vehicle mapping: Unmanned aerial vehicles (UAVs) are being used to take roles that were previously performed by traditional manned aircraft, such as remote sensing and photogrammetry. The standard path planning for UAV mapping is mainly executed by adopting the “lawnmower” mode. However, some situations that have sparse or repetitive features are problematic to map with this technique, given that orthoimage stitching relies heavily on the number and quality of image tie points. Traditional path planning can result in some unregistered images due to a lack of tie points. This paper proposes a ground feature oriented path-planning method for UAV mapping. The method first estimates the distribution of the ground feature points from a lower-resolution image. Then, image footprints are selected by applying a three-step optimization. The flight path for the UAV is then generated by solving the “grouped traveling salesman” problem. 
This approach ensures the georegistration of images during orthoimage stitching while maximizing the orthoimage coverage. Two cases, including a simulation and a real-world case, together with standard path-planning modes with different overlaps, are selected to evaluate the proposed method. The results show that the proposed method covers the same area with the smallest number of images. The model excludes problematic areas from the scanning path to generate a more efficient processing dataset.) <|cite_end|>.
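As a toy illustration of what a coverage path is, the sketch below produces a simple boustrophedon (``lawnmower'') sweep over an obstacle-free grid; real CPP methods, including the one proposed in this paper, must additionally handle obstacles, budget constraints, and multiple vehicles, so this is only a baseline illustration and not the proposed algorithm.
\begin{verbatim}
# Illustrative sketch: boustrophedon (lawnmower) coverage of a rows x cols
# grid with no obstacles; every cell is visited exactly once.
def lawnmower_path(rows, cols):
    path = []
    for r in range(rows):
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path

# Example: lawnmower_path(2, 3) ->
# [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
\end{verbatim}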
In recent years, the literature has discussed several approaches for coverage by a single vehicle <|cite_start|> (Reference: Deep reinforcement learning robot for search and rescue applications: Exploration in unknown cluttered environments: Rescue robots can be used in urban search and rescue (USAR) applications to perform the important task of exploring unknown cluttered environments. Due to the unpredictable nature of these environments, deep learning techniques can be used to perform these tasks. In this letter, we present the first use of deep learning to address the robot exploration task in USAR applications. In particular, we uniquely combine the traditional approach of frontier-based exploration with deep reinforcement learning to allow a robot to autonomously explore unknown cluttered environments. Experiments conducted with a mobile robot in unknown cluttered environments of varying sizes and layouts showed that the proposed exploration approach can effectively determine appropriate frontier locations to navigate to, while being robust to different environment layouts and sizes. Furthermore, a comparison study with other frontier exploration approaches showed that our learning-based frontier exploration technique was able to explore more of an environment earlier on, allowing for potential identification of a larger number of victims at the beginning of the time-critical exploration task.) <|cite_end|>. However, real-world factors such as battery capacity or sensor payload restrictions <|cite_start|> (Reference: A survey on multi-robot coverage path planning for model reconstruction and mapping: ) <|cite_end|> may limit the ability of a single agent to meet an operational time limit. Compared to single-vehicle CPP, a group of multiple vehicles may solve a coverage task more rapidly due to its larger footprint. Yet, to exploit the capacity of a multi-vehicle team, novel algorithms are required to determine the route for each vehicle when they can spread out and take on a narrow formation.
Motivated by the aforementioned observations, the contributions of the present paper are:
\begin{itemize}
\item A novel problem definition for non-backtracking coverage path planning with budget constraints assuming the use of a multi-UGV team in cluttered and uncertain environments.
\item A novel algorithm for solving the problem, which utilizes a hierarchical block approach to decompose a given map into appropriate cell sizes. This allows us to exploit flexible multi-UGV formations to meet multiple budget constraints.
\item A distributed virtual leader-follower formation control strategy including automatic role assignment in the formation and obstacle avoidance.
\item A comprehensive comparative study in both simulated and real-world settings to confirm the viability of our approach. Our approach outperforms existing methods in terms of maximum coverage percentage, time to achieve coverage and computational complexity.
\end{itemize}
The virtual leader-follower control approach taken in this paper is based on the virtual spring system <|cite_start|> (Reference: Forced Variational Integrators for the Formation Control of Multiagent Systems: Formation control of autonomous agents can be seen as a physical system of individuals interacting with local potentials, and whose evolution can be described by a Lagrangian function. In this article, we construct and implement forced variational integrators for the formation control of autonomous agents modeled by double integrators. In particular, we provide an accurate numerical integrator with a lower computational cost than traditional solutions. We find error estimations for the rate of the energy dissipated along with the agents’ motion to achieve desired formations. Consequently, this permits to providing sufficient conditions on the time step for the convergence of discrete formation control systems such as the consensus problem in discrete systems. We present practical applications such as the rapid estimation of regions of attraction to desired shapes in distance-based formation control.) <|cite_end|>. The virtual leader-follower approach offers several advantages over the traditional leader-follower and swarm-based methods in coverage path planning. Firstly, it eliminates the need for a physical leader, which can be costly and risky to implement. Secondly, it allows for the efficient coordination of multiple followers without the risk of collisions or formation breakage, which is a common issue with swarm-based approaches. Thirdly, a virtual leader can easily adapt to changes in the environment, dynamically adjust the path plan, and provide more accurate and reliable instructions to the followers. This is in contrast to the real leader-follower system, which may not be able to respond quickly enough to changes in the environment <|cite_start|> (Reference: Hybrid adaptive negative imaginary- neural-fuzzy control with model identification for a quadrotor: ) <|cite_end|> <|cite_start|> (Reference: Multi-vehicle formation control and obstacle avoidance using negative-imaginary systems theory: ) <|cite_end|>. Finally, the virtual leader-follower approach offers more flexibility and scalability, enabling the coordination of a large number of followers without requiring additional resources.
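To give a feel for the virtual-spring idea referenced above, the sketch below is a minimal spring-damper update that pulls each follower toward its desired offset from a virtual leader; the gain values, the double-integrator model, and the absence of obstacle-avoidance and role-assignment terms are simplifying assumptions, so this is not the controller developed in this paper.
\begin{verbatim}
# Illustrative sketch: virtual-spring formation keeping for a follower that
# tracks an offset from a virtual leader (double integrator, no obstacles).
import numpy as np

def follower_step(p, v, leader_p, offset, dt=0.05, k_spring=2.0, k_damp=1.5):
    """p, v: follower position and velocity (2,); leader_p: virtual leader
    position; offset: desired displacement of this follower from the leader."""
    target = leader_p + offset
    accel = k_spring * (target - p) - k_damp * v   # spring-damper force
    v_new = v + accel * dt
    p_new = p + v_new * dt
    return p_new, v_new

# Example: p, v = follower_step(np.zeros(2), np.zeros(2),
#                               np.array([1.0, 1.0]), np.array([0.0, -0.5]))
\end{verbatim}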
The remainder of this paper is organised as follows. Section II discusses related work from the literature. Section III states our coverage path planning problem definition and describes our approach to solving this problem. Our approach has components for path planning and prediction of how long it will take to follow a path in formation. Section IV presents a series of experiments with each of these components, first in simulations then on real UGVs in an outdoor setting. We offer conclusions and directions for future work in Section V. <|paper_end|> | [
"<|reference_start|> A survey on coverage path planning for robotics: <|reference_end|>",
"<|reference_start|> A Distributed Control Framework for a Team of Unmanned Aerial Vehicles for Dynamic Wildfire Tracking: Wildland fire fighting is a very dangerous job, and the lack of information of the fire front is one of main reasons that causes many accidents. Using unmanned aerial vehicle (UAV) to cover wildfire is promising because it can replace human in hazardous fire tracking and save operation costs significantly. In this paper we propose a distributed control framework designed for a team of UAVs that can closely monitor a wildfire in open space, and precisely track its development. The UAV team, designed for flexible deployment, can effectively avoid in-flight collision as well as cooperate well with other neighbors. Experimental results are conducted to demonstrate the capabilites of the UAV team in covering a spreading wildfire. <|reference_end|>",
"<|reference_start|> Deep reinforcement learning robot for search and rescue applications: Exploration in unknown cluttered environments: Rescue robots can be used in urban search and rescue (USAR) applications to perform the important task of exploring unknown cluttered environments. Due to the unpredictable nature of these environments, deep learning techniques can be used to perform these tasks. In this letter, we present the first use of deep learning to address the robot exploration task in USAR applications. In particular, we uniquely combine the traditional approach of frontier-based exploration with deep reinforcement learning to allow a robot to autonomously explore unknown cluttered environments. Experiments conducted with a mobile robot in unknown cluttered environments of varying sizes and layouts showed that the proposed exploration approach can effectively determine appropriate frontier locations to navigate to, while being robust to different environment layouts and sizes. Furthermore, a comparison study with other frontier exploration approaches showed that our learning-based frontier exploration technique was able to explore more of an environment earlier on, allowing for potential identification of a larger number of victims at the beginning of the time-critical exploration task. <|reference_end|>",
"<|reference_start|> A survey on multi-robot coverage path planning for model reconstruction and mapping: <|reference_end|>"
] | [
1,
3,
6,
7
] | {"<|cite_1|>": "ss-729372", "<|cite_2|>": "ss-932072", "<|cite_3|>": "ss-729373", "<|cite_4|>": "arxiv-121228", "<|cite_5|>": "ss-729374", "<|cite_6|>": "ss-1481449", "<|cite_7|>": "ss-1521917", "<|cite_8|>": "ss-1209433", "<|cite_9|>": "ss-2495068", "<|multi_cite_10_1|>": "ss-2458581", "<|multi_cite_10_2|>": "ss-729375"} |
2303.13031-2 | <|cite_start|> (Reference: Hybrid Conditional Deep Inverse Tone Mapping: Emerging modern displays are capable to render ultra-high definition (UHD) media contents with high dynamic range (HDR) and wide color gamut (WCG). Although more and more native contents as such have been getting produced, the total amount is still in severe lack. Considering the massive amount of legacy contents with standard dynamic range (SDR) which may be exploitable, the urgent demand for proper conversion techniques thus springs up. In this paper, we try to tackle the conversion task from SDR to HDR-WCG for media contents and consumer displays. We propose a deep learning based SDR-to-HDR solution, Hybrid Conditional Deep Inverse Tone Mapping (HyCondITM), which is an end-to-end trainable framework including global transform, local adjustment, and detail refinement in a single unified pipeline. We present a hybrid condition network that can simultaneously extract both global and local priors for guidance to achieve scene-adaptive and spatially-variant manipulations. Experiments show that our method achieves state-of-the-art performance in both quantitative comparisons and visual quality, out-performing the previous methods.) <|cite_end|> <|cite_start|> (Reference: Bidirectional translation between uhd-hdr and hd-sdr videos: With the popularization of ultra high definition (UHD) high dynamic range (HDR) displays, recent works focus on upgrading high definition (HD) standard dynamic range (SDR) videos to UHD-HDR versions, aiming to provides richer details and higher contrasts on advanced modern displays. However, joint considering the upgrading & downgrading translations between two types of videos, which is practical in real applications, is generally neglected. On the one hand, downgrading translation is the key to showing UHD-HDR videos on HD-SDR displays. On the other hand, considering both translations enables joint optimization and results in high quality translation. To this end, we propose the bidirectional translation network (BiT-Net), which jointly considers two translations in one network for the first time. In brief, BiT-Net is elaborately designed in an invertible fashion that can be efficiently inferred along forward and backward directions for downgrading and upgrading tasks, respectively. Based on this framework, we divide each direction into three sub-tasks, i.e., decomposition, structure-guided translation, and synthesis, to effectively translate the dynamic range and the high-frequency details. Benefiting from the dedicated architecture, our BiT-Net can work on 1) downgrading UHD-HDR videos, 2) upgrading existing HD-SDR videos, and 3) synthesizing UHD-HDR versions from the downgraded HD-SDR videos. Experiments show that the proposed method achieves state-of-the-art performances on all these three tasks.) <|cite_end|>& <|cite_start|> (Reference: Learning an inverse tone mapping network with a generative adversarial regularizer: Transferring a low-dynamic-range (LDR) image to a high-dynamic-range (HDR) image, which is the so-called inverse tone mapping (iTM), is an important imaging technique to improve visual effects of imaging devices. In this paper, we propose a novel deep learning-based iTM method, which learns an inverse tone mapping network with a generative adversarial regularizer. 
In the framework of alternating optimization, we learn a U-Net-based HDR image generator to transfer input LDR images to HDR ones, and a simple CNN-based discriminator to classify the real HDR images and the generated ones. Specifically, when learning the generator we consider the content-related loss and the generative adversarial regularizer jointly to improve the stability and the robustness of the generated HDR images. Using the learned generator as the proposed inverse tone mapping network, we achieve superior iTM results to the state-of-the-art methods consistently.) <|cite_end|> <|cite_start|> (Reference: Gan Based Multi-Exposure Inverse Tone Mapping: High dynamic range (HDR) imaging provide larger range of luminosity and wider color gamut than conventional low dynamic range (LDR) imaging. The method which transforms LDR contents to HDR contents is called inverse tone mapping. After deep neural networks are used in inverse tone mapping problem, researchers mostly focus on transforming normal exposure LDR images to HDR. However, when people use inverse tone mapping in practice, they get some ill-exposed images as well. The state-of-art algorithms can’t transform these images to HDR well.In this work, we propose an end-to-end multi-exposure inverse tone mapping (MITM) framework based on existing generative adversarial network (GAN). This framework can transform a single LDR image not only at normal exposure, but also at unsuitable exposure to a normal exposure HDR image. We use histogram equalization to preprocess the luma of the input LDR images; when training the model, we use intrinsic image decomposition to divide the output HDR images into illuminance and reflectance components and use these two components to constrain the luminance information and the color information separately. This framework can adjust the unsuitable exposure and provide a better viewing experience than other state-of-art algorithms in the experimental results.) <|cite_end|> <|cite_start|> (Reference: Deep video inverse tone mapping: Inverse tone mapping is an important topic in High Dynamic Range technology. Recent years, deep learning based image inverse tone mapping methods have been extensively studied and perform better than classical inverse tone mapping methods. However, these methods consider the inverse tone mapping problem as a domain transformation problem from LDR domain directly to HDR domain and ignore the relationship between LDR and HDR. Besides, when using these deep learning based methods to transform frames of videos, it will lead to temporal inconsistency and flickering. In this work, we propose a new way to consider the inverse tone mapping problem and design a deep learning based video inverse tone mapping algorithm to reduce the flickering. Different from previous methods, we first transform LDR resources back to approximate real scenes and use these real scenes to generate the HDR outputs. When generating HDR outputs, we use 3D convolutional neural network to reduce the flickering. We also use methods to further constrain the luminance information and the color information of HDR outputs separately. Finally, we compare our results with existing classical video inverse tone mapping algorithms and deep image inverse tone mapping methods to show our great performance, and we also prove the necessity of each part of our method.) 
<|cite_end|> <|cite_start|> (Reference: Learning-based low-complexity reverse tone mapping with linear mapping: Although high dynamic range (HDR) display has become popular recently, the legacy content such as standard dynamic range (SDR) video is still in service and needs to be properly converted on HDR display devices. Therefore, it is desirable for HDR TV sets to have the capability of automatically converting input SDR video into HDR video, which is called reverse tone mapping (RTM). In this paper, we propose a novel learning-based low-complexity RTM scheme that not only expands the suppressed dynamic ranges (DR) of the SDR videos (or images), but also effectively restores lost detail in the SDR videos. Most existing conventional RTM schemes have focused on how to expand the DR of global contrast, resulting in limitations in recovering lost detail of SDR videos. On the other hand, the recent convolutional neural network-based approaches show promising results, but they are too complex to be applied on the users’ devices in practice. In this paper, our learning-based RTM scheme is computationally simple but effective in recovering lost detail. To learn the SDR-to-HDR relation, training “SDR-HDR” images are first separated into their base layer components and detail layer components by applying a guided filter. The detail layer components of the “SDR-HDR” pairs are used to train the SDR-to-HDR mapping. The mapping matrices are computed based on kernel ridge regression. In the meantime, the global contrast of the base layers is expanded by a nonlinear function that suppresses darker regions and amplifies brighter regions to fit the full DR of a target HDR display. To verify the effectiveness of our learning-based RTM scheme, we performed subjective quality assessment for images and videos. The experimental results show that our RTM scheme outperforms the existing RTM scheme with the successful restoration of lost detail in SDR images.) <|cite_end|> <|cite_start|> (Reference: SR-ITM-GAN: Learning 4K UHD HDR with a generative adversarial network: Currently, high dynamic range (HDR) videos with high resolution (HR) have become popular due to the display and the rendered technological advancements. However, making ultra-high definition (UHD) with HDR videos is expensive. The legacy low-resolution (LR) standard dynamic range (SDR) format is still largely used in practice. It is necessary to search for a solution to transform LR SDR videos into UHD HDR format. In this paper, we consider joint super resolution and learning inverse tone mapping an issue of high-frequency reconstruction and local contrast enhancement, and we propose an architecture based on a generative adversarial network to apply joint SR-ITM learning. Specifically, we include the residual ResNeXt block (RRXB) as a basic module to better capture high-frequency textures and adopt YUV interpolation to achieve local contrast enhancement. By adopting a generative adversarial network as a pivotal training mechanism, our designs show advantages in both integration and performance. Our code is now available on GitHub: SR-ITM-GAN.) <|cite_end|>& \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}} \textbf{\textit{2446a}} <|cite_start|> (Reference: An Inverse Tone Mapping Algorithm Based on Multi-scale Dual-branch Network: Inverse tone mapping has drawn more attention recently because it can convert a large number of existing standard dynamic range (SDR) images into high dynamic range (HDR) images. 
In this paper, we proposed an inverse tone mapping algorithm based on a multi-scale dual-branch network which can restore the original information lost in under-/over-exposed areas. A multi-scale structure and masking mechanism are used to guide the reconstruction of image texture and structure. In order to enhance the robustness of the model for dealing with extremely exposed images, we apply a preprocessing method of exposure adjustment which improves the quality of the generated images. With quantitative and visual inspection experiments, we prove that the proposed algorithm has better performance than most state-of-the-art algorithms.) <|cite_end|>\\ \etc <|cite_start|> (Reference: Distilling Style from Image Pairs for Global Forward and Inverse Tone Mapping: Many image enhancement or editing operations, such as forward and inverse tone mapping or color grading, do not have a unique solution, but instead a range of solutions, each representing a different style. Despite this, existing learning-based methods attempt to learn a unique mapping, disregarding this style. In this work, we show that information about the style can be distilled from collections of image pairs and encoded into a 2- or 3-dimensional vector. This gives us not only an efficient representation but also an interpretable latent space for editing the image style. We represent the global color mapping between a pair of images as a custom normalizing flow, conditioned on a polynomial basis of the pixel color. We show that such a network is more effective than PCA or VAE at encoding image style in low-dimensional space and lets us obtain an accuracy close to 40 dB, which is about 7-10 dB improvement over the state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Zoned Mapping Network from SDR Video to HDR Video: HDR video is popular for its brilliant colors, incredible brightness and rich details. Considering the strict production conditions of HDR video, most videos are currently saved in SDR format. Converting massive SDR videos into HDR format through technical methods can quickly fill the vacancy of HDR videos in the market and improve the utilization efficiency of film and television resources. In this paper, we propose a zoned mapping network to convert the video format. According to SDR OETF, the video frames are firstly segmented into three regions, highlight, medium and darkness. Then a tone mapping model is designed and respectively trained for the above three regions. Moreover, a detail enhancement model is proposed to form a composite inverse tone mapping network. Experiments show that our method can not only achieve good subjective visual quality, but also accomplish excellent results on various objective metrics, such as PSNR, SSIM, VIF, etc.) <|cite_end|> <|cite_start|> (Reference: Towards real-world HDRTV reconstruction: A data synthesis-based approach: Existing deep learning based HDRTV reconstruction methods assume one kind of tone mapping operators (TMOs) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks on modeling the realistic degradation: information over-preservation, color bias and possible artifacts, making the trained reconstruction networks hard to generalize well to real-world cases. 
To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions. In specific, we design a conditioned two-stream network with prior tone mapping results as a guidance to synthesize SDRTVs by both global and local transformations. To train the data synthesis network, we form a novel self-supervised content loss to constraint different aspects of the synthesized SDRTVs at regions with different brightness distributions and an adversarial loss to emphasize the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. Then we collect two inference datasets containing both labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that, the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.) <|cite_end|>\end{tabular}} \\ \cline{1-3}
Dataset & KAIST \& HDRTV1K & Zeng20 & \\ \hline
\end{tabular}
\caption{Current HDRTV-to-SDR degradation models (DMs). `Dataset' lists the datasets whose SDR is degraded from HDR using that DM.}
\label{tab:hdrtv_dm}
\end{table}
\textit{\textbf{Youtube}} stands for the default conversion YouTube applied to BT.2020/PQ1000 HDR content to produce its SDR-applicable version, \textit{\textbf{Reinhard}}/\textit{\textbf{2446a}} means tone-mapping HDR to SDR using \textit{Reinhard TMO} <|cite_start|> (Reference: {Photographic tone reproduction for digital images: A classic photographic task is the mapping of the potentially high dynamic range of real world luminances to the low dynamic range of the photographic print. This tone reproduction problem is also faced by computer graphics practitioners who map digital images to a low dynamic range print or screen. The work presented in this paper leverages the time-tested techniques of photographic practice to develop a new tone reproduction operator. In particular, we use and extend the techniques developed by Ansel Adams to deal with digital images. The resulting algorithm is simple and produces good results for a wide variety of images.) <|cite_end|>/BT.2446\textit{Method A}. <|cite_start|> (Reference: Distilling Style from Image Pairs for Global Forward and Inverse Tone Mapping: Many image enhancement or editing operations, such as forward and inverse tone mapping or color grading, do not have a unique solution, but instead a range of solutions, each representing a different style. Despite this, existing learning-based methods attempt to learn a unique mapping, disregarding this style. In this work, we show that information about the style can be distilled from collections of image pairs and encoded into a 2- or 3-dimensional vector. This gives us not only an efficient representation but also an interpretable latent space for editing the image style. We represent the global color mapping between a pair of images as a custom normalizing flow, conditioned on a polynomial basis of the pixel color. We show that such a network is more effective than PCA or VAE at encoding image style in low-dimensional space and lets us obtain an accuracy close to 40 dB, which is about 7-10 dB improvement over the state-of-the-art methods.) <|cite_end|>/ <|cite_start|> (Reference: Zoned Mapping Network from SDR Video to HDR Video: HDR video is popular for its brilliant colors, incredible brightness and rich details. Considering the strict production conditions of HDR video, most videos are currently saved in SDR format. Converting massive SDR videos into HDR format through technical methods can quickly fill the vacancy of HDR videos in the market and improve the utilization efficiency of film and television resources. In this paper, we propose a zoned mapping network to convert the video format. According to SDR OETF, the video frames are firstly segmented into three regions, highlight, medium and darkness. Then a tone mapping model is designed and respectively trained for the above three regions. Moreover, a detail enhancement model is proposed to form a composite inverse tone mapping network. Experiments show that our method can not only achieve good subjective visual quality, but also accomplish excellent results on various objective metrics, such as PSNR, SSIM, VIF, etc.) <|cite_end|>/ <|cite_start|> (Reference: Towards real-world HDRTV reconstruction: A data synthesis-based approach: Existing deep learning based HDRTV reconstruction methods assume one kind of tone mapping operators (TMOs) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. 
In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks on modeling the realistic degradation: information over-preservation, color bias and possible artifacts, making the trained reconstruction networks hard to generalize well to real-world cases. To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions. In specific, we design a conditioned two-stream network with prior tone mapping results as a guidance to synthesize SDRTVs by both global and local transformations. To train the data synthesis network, we form a novel self-supervised content loss to constraint different aspects of the synthesized SDRTVs at regions with different brightness distributions and an adversarial loss to emphasize the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. Then we collect two inference datasets containing both labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that, the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions.) <|cite_end|>respectively degrade HDR to SDR by grading/\textit{Habel TMO}/another learned network.
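For reference, the global operator at the core of \textit{Reinhard TMO} (shown here in its basic form, without the white-point extension) compresses scaled luminance $L$ as
\begin{equation*}
L_d \;=\; \frac{L}{1+L},
\end{equation*}
a monotonically increasing curve that maps $[0,\infty)$ into $[0,1)$ and thus never hard-clips highlights.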
In other restoration tasks <|cite_start|> (Reference: Designing a Practical Degradation Model for Deep Blind Image Super-Resolution: It is widely acknowledged that single image super-resolution (SISR) methods would not perform well if the assumed degradation model deviates from those in real images. Although several degradation models take additional factors into consideration, such as blur, they are still not effective enough to cover the diverse degradations of real images. To address this issue, this paper proposes to design a more complex but practical degradation model that consists of randomly shuffled blur, downsampling and noise degradations. Specifically, the blur is approximated by two convolutions with isotropic and anisotropic Gaussian kernels; the downsampling is randomly chosen from nearest, bilinear and bicubic interpolations; the noise is synthesized by adding Gaussian noise with different noise levels, adopting JPEG compression with different quality factors, and generating processed camera sensor noise via reverse-forward camera image signal processing (ISP) pipeline model and RAW image noise model. To verify the effectiveness of the new degradation model, we have trained a deep blind ESRGAN super-resolver and then applied it to super-resolve both synthetic and real images with diverse degradations. The experimental results demonstrate that the new degradation model can help to significantly improve the practicability of deep super-resolvers, thus providing a powerful alternative solution for real SISR applications.) <|cite_end|> <|cite_start|> (Reference: Self-supervision versus synthetic datasets: which is the lesser evil in the context of video denoising?: Supervised training has led to state-of-the-art results in image and video denoising. However, its application to real data is limited since it requires large datasets of noisy-clean pairs that are difficult to obtain. For this reason, networks are often trained on realistic synthetic data. More recently, some self-supervised frameworks have been proposed for training such denoising networks directly on the noisy data without requiring ground truth. On synthetic denoising problems supervised training outperforms self-supervised approaches, however in recent years the gap has become narrower, especially for video. In this paper, we propose a study aiming to determine which is the best approach to train denoising networks for real raw videos: supervision on synthetic realistic data or self-supervision on real data. A complete study with quantitative results in case of natural videos with real motion is impossible since no dataset with clean-noisy pairs exists. We address this issue by considering three independent experiments in which we compare the two frameworks. We found that self-supervision on the real data outperforms supervision on synthetic data, and that in normal illumination conditions the drop in performance is due to the synthetic ground truth generation, not the noise model.) <|cite_end|> <|cite_start|> (Reference: Towards Flexible Blind JPEG Artifacts Removal: Training a single deep blind model to handle different quality factors for JPEG image artifacts removal has been attracting considerable attention due to its convenience for practical usage. However, existing deep blind methods usually directly reconstruct the image without predicting the quality factor, thus lacking the flexibility to control the output as the non-blind methods. 
To remedy this problem, in this paper, we propose a flexible blind convolutional neural network, namely FBCNN, that can predict the adjustable quality factor to control the trade-off between artifacts removal and details preservation. Specifically, FBCNN decouples the quality factor from the JPEG image via a decoupler module and then embeds the predicted quality factor into the subsequent reconstructor module through a quality factor attention block for flexible control. Besides, we find existing methods are prone to fail on non-aligned double JPEG images even with only a one-pixel shift, and we thus propose a double JPEG degradation model to augment the training data. Extensive experiments on single JPEG images, more general double JPEG images, and real-world JPEG images demonstrate that our proposed FBCNN achieves favorable performance against state-of-the-art methods in terms of both quantitative metrics and visual quality.) <|cite_end|> <|cite_start|> (Reference: Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement: Low-light image enhancement aims to improve an image's visibility while keeping its visual naturalness. Different from existing methods tending to accomplish the relighting task directly by ignoring the fidelity and naturalness recovery, we investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps. Inspired by the color image formulation (diffuse illumination color plus environment illumination color), we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color. To this end, we propose a novel Degradation-to-Refinement Generation Network (DRGN). Its distinctive features can be summarized as 1) A novel two-step generation network for degradation learning and content refinement. It is not only superior to one-step methods, but also capable of synthesizing sufficient paired samples to benefit the model training; 2) A multi-resolution fusion network to represent the target information (degradation or contents) in a multi-scale cooperative manner, which is more effective to address the complex unmixing problems. Extensive experiments on both the enhancement task and joint detection task have verified the effectiveness and efficiency of our proposed method, surpassing the SOTA by \textit{0.70dB on average and 3.18\% in mAP}, respectively. The code is available at \url{https://github.com/kuijiang0802/DRGN}.) <|cite_end|> <|cite_start|> (Reference: Modular Degradation Simulation and Restoration for Under-Display Camera: Under-display camera (UDC) provides an elegant solution for full-screen smartphones. However, UDC captured images suffer from severe degradation since sensors lie under the display. Although this issue can be tackled by image restoration networks, these networks require large-scale image pairs for training. To this end, we propose a modular network dubbed MPGNet trained using the generative adversarial network (GAN) framework for simulating UDC imaging. Specifically, we note that the UDC imaging degradation process contains brightness attenuation, blurring, and noise corruption. Thus we model each degradation with a characteristic-related modular network, and all modular networks are cascaded to form the generator. Together with a pixel-wise discriminator and supervised loss, we can train the generator to simulate the UDC imaging degradation process. 
Furthermore, we present a Transformer-style network named DWFormer for UDC image restoration. For practical purposes, we use depth-wise convolution instead of the multi-head self-attention to aggregate local spatial information. Moreover, we propose a novel channel attention module to aggregate global information, which is critical for brightness recovery. We conduct evaluations on the UDC benchmark, and our method surpasses the previous state-of-the-art models by 1.23 dB on the P-OLED track and 0.71 dB on the T-OLED track, respectively.) <|cite_end|> <|cite_start|> (Reference: LHDR: HDR Reconstruction for Legacy Content using a Lightweight DNN: High dynamic range (HDR) image is widely-used in graphics and photography due to the rich information it contains. Recently the community has started using deep neural network (DNN) to reconstruct standard dynamic range (SDR) images into HDR. Albeit the superiority of current DNN-based methods, their application scenario is still limited: (1) heavy model impedes real-time processing, and (2) inapplicable to legacy SDR content with more degradation types. Therefore, we propose a lightweight DNN-based method trained to tackle legacy SDR. For better design, we reform the problem modeling and emphasize degradation model. Experiments show that our method reached appealing performance with minimal computational cost compared with others.) <|cite_end|>, DMs are designed to have proper extent and diversity of degradation so network can learn appropriate restore capability and good generalization.
Accordingly, we argue that current DMs are not favorable for training.
Specifically, the motivation of \textit{\textbf{YouTube}} is to synthesize an HDR-like view for users possessing only an SDR display; it therefore tends to enhance/increase the brightness and saturation, so the network will, vice versa, learn to deteriorate/decline them.
Also, tone-mapping DMs, \eg \textit{\textbf{Reinhard}} and \textit{\textbf{2446a}}, are dedicated to preserving as much information from HDR as possible: they have monotonically increasing mapping curves (Fig.\ref{fig:dm_curve}) without highlight clipping, so the trained network is unlikely to recover much information in over-exposed areas. The above observations are later verified in Tab.\ref{tab:sdr_stat}, Fig.\ref{fig:teaser}\&\ref{fig:result}, \etc. <|paper_end|>
"<|reference_start|> Bidirectional translation between uhd-hdr and hd-sdr videos: With the popularization of ultra high definition (UHD) high dynamic range (HDR) displays, recent works focus on upgrading high definition (HD) standard dynamic range (SDR) videos to UHD-HDR versions, aiming to provides richer details and higher contrasts on advanced modern displays. However, joint considering the upgrading & downgrading translations between two types of videos, which is practical in real applications, is generally neglected. On the one hand, downgrading translation is the key to showing UHD-HDR videos on HD-SDR displays. On the other hand, considering both translations enables joint optimization and results in high quality translation. To this end, we propose the bidirectional translation network (BiT-Net), which jointly considers two translations in one network for the first time. In brief, BiT-Net is elaborately designed in an invertible fashion that can be efficiently inferred along forward and backward directions for downgrading and upgrading tasks, respectively. Based on this framework, we divide each direction into three sub-tasks, i.e., decomposition, structure-guided translation, and synthesis, to effectively translate the dynamic range and the high-frequency details. Benefiting from the dedicated architecture, our BiT-Net can work on 1) downgrading UHD-HDR videos, 2) upgrading existing HD-SDR videos, and 3) synthesizing UHD-HDR versions from the downgraded HD-SDR videos. Experiments show that the proposed method achieves state-of-the-art performances on all these three tasks. <|reference_end|>",
"<|reference_start|> SR-ITM-GAN: Learning 4K UHD HDR with a generative adversarial network: Currently, high dynamic range (HDR) videos with high resolution (HR) have become popular due to the display and the rendered technological advancements. However, making ultra-high definition (UHD) with HDR videos is expensive. The legacy low-resolution (LR) standard dynamic range (SDR) format is still largely used in practice. It is necessary to search for a solution to transform LR SDR videos into UHD HDR format. In this paper, we consider joint super resolution and learning inverse tone mapping an issue of high-frequency reconstruction and local contrast enhancement, and we propose an architecture based on a generative adversarial network to apply joint SR-ITM learning. Specifically, we include the residual ResNeXt block (RRXB) as a basic module to better capture high-frequency textures and adopt YUV interpolation to achieve local contrast enhancement. By adopting a generative adversarial network as a pivotal training mechanism, our designs show advantages in both integration and performance. Our code is now available on GitHub: SR-ITM-GAN. <|reference_end|>",
"<|reference_start|> Towards real-world HDRTV reconstruction: A data synthesis-based approach: Existing deep learning based HDRTV reconstruction methods assume one kind of tone mapping operators (TMOs) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks on modeling the realistic degradation: information over-preservation, color bias and possible artifacts, making the trained reconstruction networks hard to generalize well to real-world cases. To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions. In specific, we design a conditioned two-stream network with prior tone mapping results as a guidance to synthesize SDRTVs by both global and local transformations. To train the data synthesis network, we form a novel self-supervised content loss to constraint different aspects of the synthesized SDRTVs at regions with different brightness distributions and an adversarial loss to emphasize the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. Then we collect two inference datasets containing both labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that, the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions. <|reference_end|>",
"<|reference_start|> Towards real-world HDRTV reconstruction: A data synthesis-based approach: Existing deep learning based HDRTV reconstruction methods assume one kind of tone mapping operators (TMOs) as the degradation procedure to synthesize SDRTV-HDRTV pairs for supervised training. In this paper, we argue that, although traditional TMOs exploit efficient dynamic range compression priors, they have several drawbacks on modeling the realistic degradation: information over-preservation, color bias and possible artifacts, making the trained reconstruction networks hard to generalize well to real-world cases. To solve this problem, we propose a learning-based data synthesis approach to learn the properties of real-world SDRTVs by integrating several tone mapping priors into both network structures and loss functions. In specific, we design a conditioned two-stream network with prior tone mapping results as a guidance to synthesize SDRTVs by both global and local transformations. To train the data synthesis network, we form a novel self-supervised content loss to constraint different aspects of the synthesized SDRTVs at regions with different brightness distributions and an adversarial loss to emphasize the details to be more realistic. To validate the effectiveness of our approach, we synthesize SDRTV-HDRTV pairs with our method and use them to train several HDRTV reconstruction networks. Then we collect two inference datasets containing both labeled and unlabeled real-world SDRTVs, respectively. Experimental results demonstrate that, the networks trained with our synthesized data generalize significantly better to these two real-world datasets than existing solutions. <|reference_end|>"
] | [
1,
6,
10,
14
] | {"<|multi_cite_3_1|>": "arxiv-201429", "<|multi_cite_3_2|>": "arxiv-222871", "<|multi_cite_3_3|>": "ss-936651", "<|multi_cite_3_4|>": "arxiv-361520", "<|multi_cite_3_5|>": "ss-2477881", "<|multi_cite_3_6|>": "ss-683633", "<|multi_cite_5_1|>": "arxiv-329923", "<|multi_cite_5_2|>": "arxiv-415248", "<|multi_cite_5_3|>": "arxiv-370351", "<|multi_cite_5_4|>": "arxiv-328437", "<|multi_cite_5_5|>": "arxiv-448318", "<|multi_cite_5_6|>": "arxiv-463661", "<|cite_6|>": "arxiv-381946", "<|multi_cite_7_1|>": "arxiv-361847", "<|multi_cite_7_2|>": "ss-925813", "<|cite_8|>": "arxiv-375537", "<|multi_cite_9_1|>": "ss-769695", "<|multi_cite_9_2|>": "ss-860451", "<|multi_cite_11_1|>": "arxiv-128900", "<|multi_cite_11_2|>": "arxiv-154713", "<|multi_cite_11_3|>": "ss-1242277", "<|multi_cite_11_4|>": "arxiv-383503", "<|multi_cite_12_1|>": "ss-1245943", "<|multi_cite_12_2|>": "arxiv-141112", "<|multi_cite_12_3|>": "arxiv-201159", "<|multi_cite_12_4|>": "arxiv-330358", "<|multi_cite_12_5|>": "arxiv-276169", "<|multi_cite_12_6|>": "arxiv-422207", "<|multi_cite_13_1|>": "arxiv-137757", "<|multi_cite_13_2|>": "arxiv-150697", "<|multi_cite_13_3|>": "arxiv-257127", "<|multi_cite_13_4|>": "arxiv-265587", "<|multi_cite_13_5|>": "arxiv-343595", "<|multi_cite_14_1|>": "arxiv-201429", "<|multi_cite_14_2|>": "arxiv-222871", "<|multi_cite_14_3|>": "ss-936651", "<|multi_cite_14_4|>": "arxiv-361520", "<|multi_cite_14_5|>": "ss-2477881", "<|multi_cite_14_6|>": "ss-683633", "<|multi_cite_15_1|>": "ss-2557066", "<|multi_cite_15_2|>": "ss-936652", "<|multi_cite_15_3|>": "ss-936653", "<|multi_cite_15_4|>": "ss-912046", "<|multi_cite_15_5|>": "ss-936654", "<|multi_cite_15_6|>": "ss-936655", "<|multi_cite_15_7|>": "ss-936656", "<|multi_cite_15_8|>": "ss-936657", "<|multi_cite_15_9|>": "arxiv-432308", "<|multi_cite_15_10|>": "arxiv-430920", "<|multi_cite_15_11|>": "arxiv-440090", "<|multi_cite_15_12|>": "ss-936658", "<|multi_cite_15_13|>": "ss-683632", "<|multi_cite_15_14|>": "arxiv-449953", "<|multi_cite_15_15|>": "ss-1476786", "<|multi_cite_15_16|>": "ss-936659", "<|cite_16|>": "arxiv-361847", "<|cite_17|>": "arxiv-361520", "<|cite_18|>": "arxiv-361520", "<|cite_19|>": "ss-936655", "<|cite_20|>": "arxiv-449953", "<|cite_21|>": "arxiv-361520", "<|multi_cite_22_1|>": "arxiv-430920", "<|multi_cite_22_2|>": "ss-936658", "<|multi_cite_23_1|>": "arxiv-440090", "<|multi_cite_23_2|>": "ss-683632", "<|multi_cite_23_3|>": "ss-936658", "<|cite_24|>": "ss-683633", "<|cite_25|>": "ss-683632", "<|cite_26|>": "ss-936654", "<|cite_27|>": "ss-2477881", "<|multi_cite_28_1|>": "ss-936656", "<|multi_cite_28_2|>": "ss-936658", "<|multi_cite_29_1|>": "arxiv-201429", "<|multi_cite_29_2|>": "arxiv-222871", "<|multi_cite_29_3|>": "ss-936651", "<|multi_cite_29_4|>": "arxiv-432308", "<|multi_cite_29_5|>": "arxiv-440090", "<|multi_cite_29_6|>": "ss-1476786", "<|cite_30|>": "ss-936659", "<|multi_cite_31_1|>": "ss-936655", "<|multi_cite_31_2|>": "ss-936657", "<|cite_32|>": "ss-912046", "<|cite_36|>": "arxiv-201429", "<|multi_cite_37_1|>": "arxiv-222871", "<|multi_cite_37_2|>": "arxiv-440090", "<|multi_cite_37_3|>": "ss-1476786", "<|cite_38|>": "ss-936651", "<|cite_39|>": "arxiv-361520", "<|multi_cite_40_1|>": "arxiv-430920", "<|multi_cite_40_2|>": "ss-683633", "<|multi_cite_40_3|>": "ss-683632", "<|cite_41|>": "arxiv-201429", "<|cite_42|>": "arxiv-361520", "<|cite_43|>": "ss-936651", "<|multi_cite_44_1|>": "arxiv-137757", "<|multi_cite_44_2|>": "arxiv-257127", "<|multi_cite_44_3|>": "arxiv-463661", "<|multi_cite_45_1|>": "ss-936653", "<|multi_cite_45_2|>": 
"arxiv-201429", "<|multi_cite_45_3|>": "arxiv-222871", "<|multi_cite_45_4|>": "ss-936656", "<|multi_cite_45_5|>": "arxiv-361520", "<|multi_cite_45_6|>": "ss-2477881", "<|multi_cite_45_7|>": "arxiv-432308", "<|multi_cite_45_8|>": "arxiv-430920", "<|multi_cite_45_9|>": "arxiv-440090", "<|multi_cite_45_10|>": "ss-683633", "<|multi_cite_45_11|>": "ss-936658", "<|multi_cite_45_12|>": "ss-683632", "<|multi_cite_45_13|>": "ss-1476786", "<|multi_cite_46_1|>": "ss-2557066", "<|multi_cite_46_2|>": "ss-912046", "<|multi_cite_46_3|>": "ss-936654", "<|multi_cite_46_4|>": "ss-936655", "<|multi_cite_46_5|>": "ss-936651", "<|cite_47|>": "ss-936657", "<|multi_cite_48_1|>": "arxiv-449953", "<|multi_cite_48_2|>": "ss-936659", "<|multi_cite_48_3|>": "ss-1476785", "<|cite_49|>": "ss-958971", "<|cite_51|>": "arxiv-449953", "<|cite_52|>": "ss-936659", "<|cite_53|>": "ss-1476785", "<|multi_cite_54_1|>": "arxiv-329923", "<|multi_cite_54_2|>": "arxiv-415248", "<|multi_cite_54_3|>": "arxiv-370351", "<|multi_cite_54_4|>": "arxiv-328437", "<|multi_cite_54_5|>": "arxiv-448318", "<|multi_cite_54_6|>": "arxiv-463661"} |
2105.06575 | <|paper_start|> Title: Merit and Blame Assignment with Kind 2
Abstract: Merit and Blame Assignment with Kind 2: We introduce two new major features of the open-source model checker Kind 2 which provide traceability information between specification and design elements such as assumptions, guarantees, or other behavioral constraints in synchronous reactive system models. This new version of Kind 2 can identify minimal sets of design elements, known as Minimal Inductive Validity Cores, which are sufficient to prove a given set of safety properties, and also determine the set of MUST elements, design elements that are necessary to prove the given properties. In addition, Kind 2 is able to find minimal sets of design constraints, known as Minimal Cut Sets, whose violation leads the system to an unsafe state. The computed information can be used for several purposes, including assessing the quality of a system specification, tracking the safety impact of model changes, and analyzing the tolerance and resilience of a system against faults or cyber-attacks. We describe these new capabilities in some detail and report on an initial experimental evaluation of some of them.
Introduction
\label{sec:introduction}
\kind is an SMT-based model checker for safety properties of finite- and
infinite-state synchronous reactive systems. It takes as input models written
in an extension of the Lustre language <|cite_start|> (Reference: Programming and Verifying Real-Time Systems by Means of the Synchronous Data-Flow Language {LUSTRE: The benefits of using a synchronous data-flow language for programming critical real-time systems are investigated. These benefits concern ergonomy (since the dataflow approach meets traditional description tools used in this domain) and ability to support formal design and verification methods. It is shown, using a simple example, how the language LUSTRE and its associated verification tool LESAR, can be used to design a program, to specify its critical properties, and to verify these properties. As the language LUSTRE and its uses have already been discussed in several papers, emphasis is put on program verification. >) <|cite_end|> that allows the specification of
assume-guarantee-style contracts for system components.
\kind's contract language <|cite_start|> (Reference: Software Engineering and Formal Methods : 14th International Conference, SEFM 2016, Held as Part of STAF 2016, Vienna, Austria, July 4-8, 2016, Proceedings: This book constitutes the proceedings of the 14th International Conference on Software Engineering and Formal Methods, SEFM 2016, held as part of STAF 2016, in Vienna, Austria, in July 2016. The 20 full and 5 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They were organized in topical sections named: concurrency and non-interference; program analysis; model checking; verification; interaction and adaptation; and development methods) <|cite_end|> is expressive enough to allow one
to represent any (LTL) regular safety property by recasting it
in terms of invariant properties.
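For instance (an illustration of ours, not an example from the \kind distribution), the safety property $\mathbf{G}\,(\mathit{req} \Rightarrow \mathbf{X}\,\mathit{grant})$ can be recast as the invariant $p \Rightarrow \mathit{grant}$, where $p$ is an auxiliary Boolean stream defined by $p_0 = \mathit{false}$ and $p_{t+1} = \mathit{req}_t$, i.e., $p$ records whether a request was issued at the previous step.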
One of \kind's distinguishing features is its support for modular
and compositional analysis of hierarchical and multi-component systems.
\kind traverses the subsystem hierarchy bottom-up, analyzing each system component,
and performing fine-grained abstraction and refinement of the sub-components.
At the component level, \kind runs concurrently several model checking engines
which cooperate to prove or disprove contracts and properties.
In particular, it combines
two induction-based model checking techniques, $k$-induction <|cite_start|> (Reference: Formal methods in computer-aided design : third international conference, FMCAD 2000, Austin, TX, USA, November 1-3, 2000 : proceedings: Applications of Hierarchical Verification in Model Checking.- Applications of Hierarchical Verification in Model Checking.- Invited Talk.- Trends in Computing.- Invited Paper.- A Case Study in Formal Verification of Register-Transfer Logic with ACL2: The Floating Point Adder of the AMD Athlon TM Processor.- Contributed Papers.- An Algorithm for Strongly Connected Component Analysis in n log n Symbolic Steps.- Automated Refinement Checking for Asynchronous Processes.- Border-Block Triangular Form and Conjunction Schedule in Image Computation.- B2M: A Semantic Based Tool for BLIF Hardware Descriptions.- Checking Safety Properties Using Induction and a SAT-Solver.- Combining Stream-Based and State-Based Verification Techniques.- A Comparative Study of Symbolic Algorithms for the Computation of Fair Cycles.- Correctness of Pipelined Machines.- Do You Trust Your Model Checker?.- Executable Protocol Specification in ESL.- Formal Verification of Floating Point Trigonometric Functions.- Hardware Modeling Using Function Encapsulation.- A Methodology for the Formal Analysis of Asynchronous Micropipelines.- A Methodology for Large-Scale Hardware Verification.- Model Checking Synchronous Timing Diagrams.- Model Reductions and a Case Study.- Modeling and Parameters Synthesis for an Air TrafficManagement System.- Monitor-Based Formal Specification of PCI.- SAT-Based Image Computation with Application in Reachability Analysis.- SAT-Based Verification without State Space Traversal.- Scalable Distributed On-the-Fly Symbolic Model Checking.- The Semantics of Verilog Using Transition System Combinators.- Sequential Equivalence Checking by Symbolic Simulation.- Speeding Up Image Computation by Using RTL Information.- Symbolic Checking of Signal-Transition Consistency for Verifying High-Level Designs.- Symbolic Simulation with Approximate Values.- A Theory of Consistency for Modular Synchronous Systems.- Verifying Transaction Ordering Properties in Unbounded Bus Networks through Combined Deductive/Algorithmic Methods.- Visualizing System Factorizations with Behavior Tables.) <|cite_end|> and
IC3 <|cite_start|> (Reference: Verification, Model Checking, and Abstract Interpretation - 12th International Conference, VMCAI 2011, Austin, TX, USA, January 23-25, 2011. Proceedings: ) <|cite_end|>, with various auxiliary invariant generation methods.
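Recall that, in its standard formulation, $k$-induction establishes an invariant property $P$ of a transition system with initial-state predicate $I$ and transition relation $T$ by discharging two entailments (the engine-specific details in \kind differ, but the principle is the same):
\begin{align*}
  I(s_0) \wedge \bigwedge_{i=0}^{k-2} T(s_i, s_{i+1}) &\;\Longrightarrow\; \bigwedge_{i=0}^{k-1} P(s_i) && \text{(base case)}\\
  \bigwedge_{i=0}^{k-1} P(s_i) \wedge \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) &\;\Longrightarrow\; P(s_k) && \text{(inductive step)}
\end{align*}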
One clear strength of model checkers is their ability to return precise
error traces witnessing the violation of a given safety property.
In addition to being invaluable to help identify and correct
bugs, error traces also represent a checkable unsafety certificate.
Similarly, many model checkers are currently able to return
some form of corroborating evidence when they declare a safety
property to be satisfied by a system under analysis.
For instance, \kind can produce
an independently checkable proof certificate for the properties that
it claims to have proven <|cite_start|> (Reference: 2016 Formal Methods in Computer-Aided Design, FMCAD 2016, Mountain View, CA, USA, October 3-6, 2016: ) <|cite_end|>. However, these certificates,
in the form of a $k$-inductive invariant, give limited user-level
insight on what elements of the system model contribute to
the satisfaction of the properties.
\medskip
\noindent
\textbf{Contributions}
We describe two new features of \kind
that provide more insights on verified properties:
(1) the identification of minimal sets of model elements that are
sufficient to prove a given set of safety properties, as well as
the subset of design elements that are necessary to prove the
given properties;
(2) the computation of minimal sets of design constraints whose violation leads
the system to falsify one or more of the given properties.
\medskip
Although these pieces of information are closely related, as we explain later,
each of them can be naturally mapped to a typical use case in model-based
software development: respectively,
\emph{merit assignment} and \emph{blame assignment}.
With the former the focus is on assessing the quality of a system specification,
tracking the safety impact of model changes, and assisting in the synthesis of
optimal implementations. With the latter, the goal is to determine
the tolerance and resilience of a system against faults or cyber-attacks.
In general, proof-based traceability information can be used to perform
a variety of engineering analyses,
including vacuity detection <|cite_start|> (Reference: Vacuity detection in temporal model checking
: ) <|cite_end|>;
coverage analysis <|cite_start|> (Reference: Proceedings of the 47th Design Automation Conference, DAC 2010, Anaheim, California, USA, July 13-18, 2010: ) <|cite_end|> <|cite_start|> (Reference: Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension: ) <|cite_end|>;
impact analysis <|cite_start|> (Reference: 2019 24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA): Safety and efficiency of modern industrial plants can be improved by providing operators with effective digital assistants to diagnose abnormal situations occurring in the plant. To make sense of a large number of alarms, root cause analysis can help pinpoint the origin of an abnormal situation. We investigate the translation of qualitative causal models into Bayesian belief networks (BBN) to utilize efficient tools for probability inference. The diagnosis result of a fault scenario of the Tennessee-Eastman-Process highlight the feasibility of the principle approach and the ongoing research aims to fully leverage the potential of BBN. Keywords—Bayesian methods, Expert systems, Process Control, Fault diagnosis, Fault-trees) <|cite_end|>, design optimization;
and robustness analysis <|cite_start|> (Reference: 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC): ) <|cite_end|>.
Identifying which model elements are required for a proof,
and assessing the relative importance of different model elements, is critical
for determining the quality of the overall model (including its assume-guarantee
specification), deciding when and where to implement changes,
identifying components that need to be reverified, and measuring
the tolerance and resilience of the system against faults and attacks.
Related Work
The computation of an approximate MIVC was first available in
the open-source model checker \jkind around
the same time the technique was introduced in <|cite_start|> (Reference: Proceedings of the 24th ACM Conference on Economics and Computation: ) <|cite_end|>.
More recently, \jkind started to offer support for
the computation of all MIVCs based on
the \emph{offline} algorithm described in <|cite_start|> (Reference: 2017 Formal Methods in Computer Aided Design, FMCAD 2017, Vienna, Austria, October 2-6, 2017: ) <|cite_end|>.
The algorithm is considered \emph{offline} because it is not
until all IVCs have been computed that one knows whether the
solutions computed are, in fact, minimal. For
models that contain many IVCs, this approach can be impractically expensive
or simply not terminate. However, for applications where only a full
enumeration of the MIVCs is of interest,
this technique may offer better overall performance. The main idea is
to use algorithm \ivcuc for the minimization of the IVC in
line~\ref{ivc:line:minIVC} of Algorithm~\ref{alg:AllMIVCs},
as opposed to the more expensive algorithm \ivcucbf that
ensures minimality, and to not minimize the cut set in
line~\ref{ivc:line:getMCS} before adding it to the map.
Although not part of the official distribution of \jkind,
the \emph{online} algorithm for computing all MIVCs
presented in <|cite_start|> (Reference: Software Engineering and Formal Methods: 21st International Conference, SEFM 2023, Eindhoven, The Netherlands, November 6-10, 2023, Proceedings: ) <|cite_end|> has also been implemented
in the tool. Similarly to Algorithm~\ref{alg:AllMIVCs},
it incorporates the idea of reducing the cardinality of
the cut sets generated when calls to \textsf{Verify} return
unsafe. In contrast, the method only tries to reduce
the cardinality when \textsf{Verify} returns unsafe within
algorithm \ivcucbf, not in the main loop of
Algorithm~\ref{alg:AllMIVCs}.
Moreover, the reduction is based on retrieving a maximal
set of \emph{map} that contains the seed,
and checking whether the subset is an IVC or not.
If it is not an IVC, the complement of the subset is
an approximation of an MCS.
Otherwise, approximate MIVCs are computed and used
to reduce the elements of the seed until the subset
is not an IVC anymore.
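For readability, the following is a minimal, self-contained Python sketch of the generic duality-based (``seed-and-block'') enumeration scheme underlying the algorithms discussed above; it is only an illustration and does not reproduce Algorithm~\ref{alg:AllMIVCs} exactly. The procedures \texttt{verify} and \texttt{shrink\_to\_mivc} are assumed black boxes, and the naive subset scan stands in for the SAT-based \emph{map} solver used in practice.
\begin{verbatim}
from itertools import combinations

def all_mivcs(elements, verify, shrink_to_mivc):
    # elements: model elements (equations, assumptions, guarantees, ...)
    # verify(S): True iff the properties are provable using only S
    # shrink_to_mivc(S): reduces an IVC S to a minimal IVC
    elements = list(elements)
    found_mivcs = []  # supersets of these are explored (non-minimal IVCs)
    non_ivcs = []     # subsets of these are explored (cannot be IVCs)

    def maximal_unexplored_seed():
        # Naive stand-in for the SAT-based "map": try larger subsets first.
        for k in range(len(elements), 0, -1):
            for cand in combinations(elements, k):
                cand = frozenset(cand)
                if any(cand >= m for m in found_mivcs):
                    continue  # superset of a known MIVC
                if any(cand <= s for s in non_ivcs):
                    continue  # subset of a known non-IVC
                return cand
        return None

    while (seed := maximal_unexplored_seed()) is not None:
        if verify(seed):
            # seed is an IVC: shrink it and block all of its supersets
            found_mivcs.append(frozenset(shrink_to_mivc(seed)))
        else:
            # seed is not an IVC: block all of its subsets; the complement
            # of the seed over-approximates a correction (cut) set
            non_ivcs.append(seed)
    return found_mivcs
\end{verbatim}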
Unlike \kind, \jkind does not have native support for assume-guarantee
contracts in its input language. Thus, \jkind only considers equations
for the generation of IVCs. In contrast, \kind allows the user to
select
not only design elements
such as node calls and equations but also specification elements
such as assumptions and guarantees.
An algorithm for computing all MCSs is described by Bozzano et al. <|cite_start|> (Reference: Computer aided verification : 27th international conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015 : proceedings: A Trusted Mechanised Specification of JavaScript: One Year On.- Model Checking and Refinements.- On Automation of CTL* Verification for Infinite-State Systems.- Algorithms for Model Checking HyperLTL and HyperCTL .- Fairness Modulo Theory: A New Approach to LTL Software Model Checking.- Model Checking Parameterized Asynchronous Shared-Memory Systems.- SMT and POR Beat Counter Abstraction: Parameterized Model Checking of Threshold-Based Distributed Algorithms.- Skipping Refinement.- Quantitative Reasoning.- Percentile Queries in Multi-dimensional Markov Decision Processes.- Faster Algorithms for Quantitative Verification in Constant Treewidth Graphs.- Counterexample Explanation by Learning Small Strategies in Markov Decision Processes.- Symbolic Polytopes for Quantitative Interpolation and Verification.- Adaptive Aggregation of Markov Chains: Quantitative Analysis of Chemical Reaction Networks.- PROPhESY: A PRObabilistic ParamEter SYnthesis Tool.- Software Analysis.- Effective Search-Space Pruning for Solvers of String Equations, Regular Expressions and Length Constraints.- Automata-Based Model Counting for String Constraints.- OpenJDK's Java.utils.Collection.sort() Is Broken: The Good, the Bad and the Worst Case.- Tree Buffers.- Learning Commutativity.- Specifications.- Angelic Verification: Precise Verification Modulo Unknowns.- The SeaHorn Verification Framework.- Automatic Rootcausing for Program Equivalence Failures in Binaries.- Fine-Grained Caching of Verification Results.- Predicting a Correct Program in Programming by Example.- Abstract Interpretation with Higher-Dimensional Ellipsoids and Conic Extrapolation.- Lightning Talks.- ADAM: Causality-Based Synthesis of Distributed Systems.- Alchemist: Learning Guarded Affine Functions.- OptiMathSAT: A Tool for Optimization Modulo Theories.- Systematic Asynchrony Bug Exploration for Android Apps.- Norn: An SMT Solver for String Constraints.- PVSio-web 2.0: Joining PVS to HCI.- The Hanoi Omega-Automata Format.- The Open-Source LearnLib: A Framework for Active Automata Learning.- BBS: A Phase-Bounded Model Checker for Asynchronous Programs.- Time-Aware Abstractions in HybridSal.- A Type-Directed Approach to Program Repair.- Formal Design and Safety Analysis of AIR6110 Wheel Brake System.- Meeting a Powertrain Verification Challenge.- Synthesising Executable Gene Regulatory Networks from Single-Cell Gene Expression Data.- Empirical Software Metrics for Benchmarking of Verification Tools.- Interpolation, IC3/PDR, and Invariants Property-Directed Inference of Universal Invariants or Proving Their Absence.- Efficient Anytime Techniques for Model-Based Safety Analysis.- Boosting k-induction with Continuously-Refined Invariants.- Fast Interpolating BMC.- Counterexample-Guided Polynomial Loop Invariant Generation by Lagrange Interpolation.) <|cite_end|>.
Like our Algorithm~\ref{alg:AllMCSs}, their technique computes the cut sets
in increasing order of cardinality to prevent the generation of non-minimal solutions.
However, their method relies on an IC3-based routine for parameter
synthesis to compute all the solutions in each layer. Therefore,
instead of relying on a black-box \textsf{Verify} procedure to solve
multiple ordinary model checking queries, they use a specialized algorithm.
The main advantage in that case is that the information learnt to block a particular
counterexample can be reused when considering new ones.
\bibliographystyle{splncs04}
\bibliography{main.bib}
\end{document} <|paper_end|> | [
"<|reference_start|> Software Engineering and Formal Methods : 14th International Conference, SEFM 2016, Held as Part of STAF 2016, Vienna, Austria, July 4-8, 2016, Proceedings: This book constitutes the proceedings of the 14th International Conference on Software Engineering and Formal Methods, SEFM 2016, held as part of STAF 2016, in Vienna, Austria, in July 2016. The 20 full and 5 short papers presented in this volume were carefully reviewed and selected from 88 submissions. They were organized in topical sections named: concurrency and non-interference; program analysis; model checking; verification; interaction and adaptation; and development methods <|reference_end|>",
"<|reference_start|> 2016 Formal Methods in Computer-Aided Design, FMCAD 2016, Mountain View, CA, USA, October 3-6, 2016: <|reference_end|>",
"<|reference_start|> Proceedings of the 24th ACM Conference on Economics and Computation: <|reference_end|>",
"<|reference_start|> Computer aided verification : 27th international conference, CAV 2015, San Francisco, CA, USA, July 18-24, 2015 : proceedings: A Trusted Mechanised Specification of JavaScript: One Year On.- Model Checking and Refinements.- On Automation of CTL* Verification for Infinite-State Systems.- Algorithms for Model Checking HyperLTL and HyperCTL .- Fairness Modulo Theory: A New Approach to LTL Software Model Checking.- Model Checking Parameterized Asynchronous Shared-Memory Systems.- SMT and POR Beat Counter Abstraction: Parameterized Model Checking of Threshold-Based Distributed Algorithms.- Skipping Refinement.- Quantitative Reasoning.- Percentile Queries in Multi-dimensional Markov Decision Processes.- Faster Algorithms for Quantitative Verification in Constant Treewidth Graphs.- Counterexample Explanation by Learning Small Strategies in Markov Decision Processes.- Symbolic Polytopes for Quantitative Interpolation and Verification.- Adaptive Aggregation of Markov Chains: Quantitative Analysis of Chemical Reaction Networks.- PROPhESY: A PRObabilistic ParamEter SYnthesis Tool.- Software Analysis.- Effective Search-Space Pruning for Solvers of String Equations, Regular Expressions and Length Constraints.- Automata-Based Model Counting for String Constraints.- OpenJDK's Java.utils.Collection.sort() Is Broken: The Good, the Bad and the Worst Case.- Tree Buffers.- Learning Commutativity.- Specifications.- Angelic Verification: Precise Verification Modulo Unknowns.- The SeaHorn Verification Framework.- Automatic Rootcausing for Program Equivalence Failures in Binaries.- Fine-Grained Caching of Verification Results.- Predicting a Correct Program in Programming by Example.- Abstract Interpretation with Higher-Dimensional Ellipsoids and Conic Extrapolation.- Lightning Talks.- ADAM: Causality-Based Synthesis of Distributed Systems.- Alchemist: Learning Guarded Affine Functions.- OptiMathSAT: A Tool for Optimization Modulo Theories.- Systematic Asynchrony Bug Exploration for Android Apps.- Norn: An SMT Solver for String Constraints.- PVSio-web 2.0: Joining PVS to HCI.- The Hanoi Omega-Automata Format.- The Open-Source LearnLib: A Framework for Active Automata Learning.- BBS: A Phase-Bounded Model Checker for Asynchronous Programs.- Time-Aware Abstractions in HybridSal.- A Type-Directed Approach to Program Repair.- Formal Design and Safety Analysis of AIR6110 Wheel Brake System.- Meeting a Powertrain Verification Challenge.- Synthesising Executable Gene Regulatory Networks from Single-Cell Gene Expression Data.- Empirical Software Metrics for Benchmarking of Verification Tools.- Interpolation, IC3/PDR, and Invariants Property-Directed Inference of Universal Invariants or Proving Their Absence.- Efficient Anytime Techniques for Model-Based Safety Analysis.- Boosting k-induction with Continuously-Refined Invariants.- Fast Interpolating BMC.- Counterexample-Guided Polynomial Loop Invariant Generation by Lagrange Interpolation. <|reference_end|>"
] | [
1,
4,
10,
13
] | {"<|cite_2|>": "ss-1409212", "<|cite_3|>": "ss-1409213", "<|cite_4|>": "ss-1409214", "<|cite_5|>": "ss-1409215", "<|cite_6|>": "ss-1409216", "<|cite_7|>": "ss-925771", "<|multi_cite_8_1|>": "ss-1409217", "<|multi_cite_8_2|>": "ss-1409218", "<|cite_9|>": "ss-1409219", "<|multi_cite_10_1|>": "ss-677491", "<|cite_12|>": "ss-1537273", "<|cite_13|>": "ss-1409220", "<|cite_14|>": "ss-1409221", "<|cite_15|>": "ss-1950062"} |
2404.19250 | <|paper_start|> Title: Enhancing Intrinsic Features for Debiasing via Investigating Class-Discerning Common Attributes in Bias-Contrastive Pair
Abstract: Enhancing Intrinsic Features for Debiasing via Investigating Class-Discerning Common Attributes in Bias-Contrastive Pair: In the image classification task, deep neural networks frequently rely on bias attributes that are spuriously correlated with a target class in the presence of dataset bias, resulting in degraded performance when applied to data without bias attributes. The task of debiasing aims to compel classifiers to learn intrinsic attributes that inherently define a target class rather than focusing on bias attributes. While recent approaches mainly focus on emphasizing the learning of data samples without bias attributes (i.e., bias-conflicting samples) compared to samples with bias attributes (i.e., bias-aligned samples), they fall short of directly guiding models where to focus for learning intrinsic features. To address this limitation, this paper proposes a method that provides the model with explicit spatial guidance that indicates the region of intrinsic features. We first identify the intrinsic features by investigating the class-discerning common features between a bias-aligned (BA) sample and a bias-conflicting (BC) sample (i.e., bias-contrastive pair). Next, we enhance the intrinsic features in the BA sample that are relatively under-exploited for prediction compared to the BC sample. To construct the bias-contrastive pair without using bias information, we introduce a bias-negative score that distinguishes BC samples from BA samples employing a biased model. The experiments demonstrate that our method achieves state-of-the-art performance on synthetic and real-world datasets with various levels of bias severity.
Introduction
Deep neural networks in image classification <|cite_start|> (Reference: Going Deeper with Convolutions: We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.) <|cite_end|> <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|> <|cite_start|> (Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.) <|cite_end|> <|cite_start|> (Reference: Wide Residual Networks: Deep residual networks were shown to be able to scale up to thousands of layers and still have improving performance. 
However, each fraction of a percent of improved accuracy costs nearly doubling the number of layers, and so training very deep residual networks has a problem of diminishing feature reuse, which makes these networks very slow to train. To tackle these problems, in this paper we conduct a detailed experimental study on the architecture of ResNet blocks, based on which we propose a novel architecture where we decrease depth and increase width of residual networks. We call the resulting network structures wide residual networks (WRNs) and show that these are far superior over their commonly used thin and very deep counterparts. For example, we demonstrate that even a simple 16-layer-deep wide residual network outperforms in accuracy and efficiency all previous deep residual networks, including thousand-layer-deep networks, achieving new state-of-the-art results on CIFAR, SVHN, COCO, and significant improvements on ImageNet. Our code and models are available at https://github.com/szagoruyko/wide-residual-networks) <|cite_end|> are known to be vulnerable to the dataset bias <|cite_start|> (Reference: Unbiased look at dataset bias.: Datasets are an integral part of contemporary object recognition research. They have been the chief reason for the considerable progress in the field, not just as source of large amounts of training data, but also as means of measuring and comparing performance of competing algorithms. At the same time, datasets have often been blamed for narrowing the focus of object recognition research, reducing it to a single benchmark performance number. Indeed, some datasets, that started out as data capture efforts aimed at representing the visual world, have become closed worlds unto themselves (e.g. the Corel world, the Caltech-101 world, the PASCAL VOC world). With the focus on beating the latest benchmark numbers on the latest dataset, have we perhaps lost sight of the original purpose? The goal of this paper is to take stock of the current state of recognition datasets. We present a comparison study using a set of popular datasets, evaluated based on a number of criteria including: relative data bias, cross-dataset generalization, effects of closed-world assumption, and sample value. The experimental results, some rather surprising, suggest directions that can improve dataset collection as well as algorithm evaluation protocols. But more broadly, the hope is to stimulate discussion in the community regarding this very important, but largely neglected issue.) <|cite_end|>, which refers to a spurious correlation between the target classes and the peripheral attributes.
Image classification aims to learn intrinsic attributes, i.e., the visual features that inherently define a target class and generally appear across the samples of that class.
However, when dataset bias exists in the training data, models unintentionally tend to rely on the frequently appearing peripheral attribute (\ie, the bias attribute) to predict the class.
For instance, if airplanes in the training images mostly appear in the sky, a model can heavily rely on the sky to predict an image as the airplane class due to the sky's high correlation with that class.
This indicates that the model is biased towards the bias attribute (\eg, the sky) rather than focusing on intrinsic features (\eg, the shape of the wings or the body) when making decisions.
As a result, even though the biased model achieves high accuracy on samples that include bias attributes (\eg, airplanes in the sky), termed bias-aligned (BA) samples, it may fail to accurately predict samples devoid of such bias attributes (\eg, airplanes on the runway), referred to as bias-conflicting (BC) samples.
In this regard, debiasing aims to encourage the model to focus on intrinsic attributes rather than bias attributes when dataset bias exists.
One straightforward approach is utilizing prior knowledge regarding bias (\eg, labels for bias attribute) to inform the model which attributes to focus on or not to focus on <|cite_start|> (Reference: Learning Not to Learn: Training Deep Neural Networks with Biased Data: We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased. Since a neural network efficiently learns data distribution, a network is likely to learn the bias information to categorize input data. It leads to poor performance at test time, if the bias is, in fact, irrelevant to the categorization. In this paper, we formulate a regularization loss based on mutual information between feature embedding and bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train the network adversarially against the feature embedding network. At the end of learning, the bias prediction network is not able to predict the bias not because it is poorly trained, but because the feature embedding network successfully unlearns the bias information. We also demonstrate quantitative and qualitative experimental results which show that our algorithm effectively removes the bias information from feature embedding.) <|cite_end|> <|cite_start|> (Reference: Learning Robust Representations by Projecting Superficial Statistics Out: Despite impressive performance as evaluated on i.i.d. holdout data, deep neural networks depend heavily on superficial statistics of the training data and are liable to break under distribution shift. For example, subtle changes to the background or texture of an image can break a seemingly powerful classifier. Building on previous work on domain generalization, we hope to produce a classifier that will generalize to previously unseen domains, even when domain identifiers are not available during training. This setting is challenging because the model may extract many distribution-specific (superficial) signals together with distribution-agnostic (semantic) signals. To overcome this challenge, we incorporate the gray-level co-occurrence matrix (GLCM) to extract patterns that our prior knowledge suggests are superficial: they are sensitive to the texture but unable to capture the gestalt of an image. Then we introduce two techniques for improving our networks' out-of-sample performance. The first method is built on the reverse gradient method that pushes our model to learn representations from which the GLCM representation is not predictable. The second method is built on the independence introduced by projecting the model's representation onto the subspace orthogonal to GLCM representation's. We test our method on the battery of standard domain generalization data sets and, interestingly, achieve comparable or better performance as compared to other domain generalization methods that explicitly require samples from the target distribution for training.) <|cite_end|> <|cite_start|> (Reference: Learning De-biased Representations with Biased Representations: Many machine learning algorithms are trained and evaluated by splitting data from a single source into training and test sets. 
While such focus on in-distribution learning scenarios has led to interesting advancement, it has not been able to tell if models are relying on dataset biases as shortcuts for successful prediction (e.g., using snow cues for recognising snowmobiles), resulting in biased models that fail to generalise when the bias shifts to a different class. The cross-bias generalisation problem has been addressed by de-biasing training data through augmentation or re-sampling, which are often prohibitive due to the data collection cost (e.g., collecting images of a snowmobile on a desert) and the difficulty of quantifying or expressing biases in the first place. In this work, we propose a novel framework to train a de-biased representation by encouraging it to be different from a set of representations that are biased by design. This tactic is feasible in many scenarios where it is much easier to define a set of biased representations than to define and quantify bias. We demonstrate the efficacy of our method across a variety of synthetic and real-world biases; our experiments show that the method discourages models from taking bias shortcuts, resulting in improved generalisation. Source code is available at https://github.com/clovaai/rebias.) <|cite_end|> <|cite_start|> (Reference: EnD: Entangling and Disentangling deep representations for bias correction: Artificial neural networks perform state-of-the-art in an ever-growing number of tasks, and nowadays they are used to solve an incredibly large variety of tasks. There are problems, like the presence of biases in the training data, which question the generalization capability of these models. In this work we propose EnD, a regularization strategy whose aim is to prevent deep models from learning unwanted biases. In particular, we insert an "information bottleneck" at a certain point of the deep neural network, where we disentangle the information about the bias, still letting the useful information for the training task forward-propagating in the rest of the model. One big advantage of EnD is that we do not require additional training complexity (like decoders or extra layers in the model), since it is a regularizer directly applied on the trained model. Our experiments show that EnD effectively improves the generalization on unbiased test sets, and it can be effectively applied on real-case scenarios, like removing hidden biases in the COVID-19 detection from radiographic images.) <|cite_end|>.
However, acquiring such bias information is often infeasible in real-world scenarios.
Therefore, recent studies <|cite_start|> (Reference: Learning from Failure: Training Debiased Classifier from Biased Classifier: Neural networks often learn to make predictions that overly rely on spurious correlation existing in the dataset, which causes the model to be biased. While previous work tackles this issue by using explicit labeling on the spuriously correlated attributes or presuming a particular bias type, we instead utilize a cheaper, yet generic form of human knowledge, which can be widely applicable to various types of bias. We first observe that neural networks learn to rely on the spurious correlation only when it is "easier" to learn than the desired knowledge, and such reliance is most prominent during the early phase of training. Based on the observations, we propose a failure-based debiasing scheme by training a pair of neural networks simultaneously. Our main idea is twofold; (a) we intentionally train the first network to be biased by repeatedly amplifying its "prejudice", and (b) we debias the training of the second network by focusing on samples that go against the prejudice of the biased network in (a). Extensive experiments demonstrate that our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets. Surprisingly, our framework even occasionally outperforms the debiasing methods requiring explicit supervision of the spuriously correlated attributes.) <|cite_end|> <|cite_start|> (Reference: Just Train Twice: Improving Group Robustness without Training Group Information: Standard training via empirical risk minimization (ERM) can produce models that achieve high accuracy on average but low accuracy on certain groups, especially in the presence of spurious correlations between the input and label. Prior approaches that achieve high worst-group accuracy, like group distributionally robust optimization (group DRO) require expensive group annotations for each training point, whereas approaches that do not use such group annotations typically achieve unsatisfactory worst-group accuracy. In this paper, we propose a simple two-stage approach, JTT, that first trains a standard ERM model for several epochs, and then trains a second model that upweights the training examples that the first model misclassified. Intuitively, this upweights examples from groups on which standard ERM models perform poorly, leading to improved worst-group performance. Averaged over four image classification and natural language processing tasks with spurious correlations, JTT closes 75% of the gap in worst-group accuracy between standard ERM and group DRO, while only requiring group annotations on a small validation set in order to tune hyperparameters.) <|cite_end|> <|cite_start|> (Reference: Revisiting the Importance of Amplifying Bias for Debiasing: In image classification, "debiasing" aims to train a classifier to be less susceptible to dataset bias, the strong correlation between peripheral attributes of data samples and a target class. For example, even if the frog class in the dataset mainly consists of frog images with a swamp background (i.e., bias-aligned samples), a debiased classifier should be able to correctly classify a frog at a beach (i.e., bias-conflicting samples). Recent debiasing approaches commonly use two components for debiasing, a biased model $f_B$ and a debiased model $f_D$. 
$f_B$ is trained to focus on bias-aligned samples (i.e., overfitted to the bias) while $f_D$ is mainly trained with bias-conflicting samples by concentrating on samples which $f_B$ fails to learn, leading $f_D$ to be less susceptible to the dataset bias. While the state-of-the-art debiasing techniques have aimed to better train $f_D$, we focus on training $f_B$, an overlooked component until now. Our empirical analysis reveals that removing the bias-conflicting samples from the training set for $f_B$ is important for improving the debiasing performance of $f_D$. This is due to the fact that the bias-conflicting samples work as noisy samples for amplifying the bias for $f_B$ since those samples do not include the bias attribute. To this end, we propose a simple yet effective data sample selection method which removes the bias-conflicting samples to construct a bias-amplified dataset for training $f_B$. Our data sample selection method can be directly applied to existing reweighting-based debiasing approaches, obtaining consistent performance boost and achieving the state-of-the-art performance on both synthetic and real-world datasets.) <|cite_end|> <|cite_start|> (Reference: Learning Debiased Representation via Disentangled Feature Augmentation: Image classification models tend to make decisions based on peripheral attributes of data items that have strong correlation with a target variable (i.e., dataset bias). These biased models suffer from the poor generalization capability when evaluated on unbiased datasets. Existing approaches for debiasing often identify and emphasize those samples with no such correlation (i.e., bias-conflicting) without defining the bias type in advance. However, such bias-conflicting samples are significantly scarce in biased datasets, limiting the debiasing capability of these approaches. This paper first presents an empirical analysis revealing that training with "diverse" bias-conflicting samples beyond a given training set is crucial for debiasing as well as the generalization capability. Based on this observation, we propose a novel feature-level data augmentation technique in order to synthesize diverse bias-conflicting samples. To this end, our method learns the disentangled representation of (1) the intrinsic attributes (i.e., those inherently defining a certain class) and (2) bias attributes (i.e., peripheral attributes causing the bias), from a large number of bias-aligned samples, the bias attributes of which have strong correlation with the target variable. Using the disentangled representation, we synthesize bias-conflicting samples that contain the diverse intrinsic attributes of bias-aligned samples by swapping their latent features. By utilizing these diversified bias-conflicting features during the training, our approach achieves superior classification accuracy and debiasing results against the existing baselines on synthetic and real-world datasets.) <|cite_end|> <|cite_start|> (Reference: SelecMix: Debiased Learning by Contradicting-pair Sampling: Neural networks trained with ERM (empirical risk minimization) sometimes learn unintended decision rules, in particular when their training data is biased, i.e., when training labels are strongly correlated with undesirable features. To prevent a network from learning such features, recent methods augment training data such that examples displaying spurious correlations (i.e., bias-aligned examples) become a minority, whereas the other, bias-conflicting examples become prevalent. 
However, these approaches are sometimes difficult to train and scale to real-world data because they rely on generative models or disentangled representations. We propose an alternative based on mixup, a popular augmentation that creates convex combinations of training examples. Our method, coined SelecMix, applies mixup to contradicting pairs of examples, defined as showing either (i) the same label but dissimilar biased features, or (ii) different labels but similar biased features. Identifying such pairs requires comparing examples with respect to unknown biased features. For this, we utilize an auxiliary contrastive model with the popular heuristic that biased features are learned preferentially during training. Experiments on standard benchmarks demonstrate the effectiveness of the method, in particular when label noise complicates the identification of bias-conflicting examples.) <|cite_end|> have proposed debiasing methods that do not require bias information.
They identify and emphasize BC samples during the training using an additional biased classifier that mainly learns the bias attributes.
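For illustration, a minimal PyTorch-style sketch of this reweighting idea is given below; it loosely follows the relative-difficulty weighting of Learning from Failure, and the generalized cross-entropy loss, the weighting formula, and the variable names are illustrative assumptions for exposition rather than details drawn from any single cited method.
\begin{verbatim}
import torch
import torch.nn.functional as F

def gce_loss(logits, target, q=0.7):
    # Generalized cross-entropy: emphasizes easy samples so that the
    # biased model f_B overfits to the bias attribute.
    p = F.softmax(logits, dim=1).gather(1, target.unsqueeze(1)).squeeze(1)
    return (1.0 - p.clamp(min=1e-6) ** q) / q

def debias_step(f_b, f_d, opt_b, opt_d, x, y):
    """One training step: f_B amplifies the bias, while f_D up-weights
    samples that f_B finds hard, i.e., likely bias-conflicting samples."""
    logits_b, logits_d = f_b(x), f_d(x)
    loss_b = F.cross_entropy(logits_b, y, reduction="none")
    loss_d = F.cross_entropy(logits_d, y, reduction="none")
    # Relative difficulty: close to 1 when f_B fails on the sample.
    w = loss_b.detach() / (loss_b.detach() + loss_d.detach() + 1e-8)
    opt_b.zero_grad()
    gce_loss(logits_b, y).mean().backward()
    opt_b.step()
    opt_d.zero_grad()
    (w * loss_d).mean().backward()
    opt_d.step()
    return w
\end{verbatim}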
However, such a training strategy fails to directly indicate where the model should focus to learn the intrinsic features.
To address this issue, we present a debiasing approach that explicitly informs the model of the region of the intrinsic features during training without using bias labels.
While the intrinsic features in an unbiased dataset can simply be identified as the features that generally appear across the training samples, the generally appearing features in a biased dataset inevitably include bias features.
Therefore, we identify the intrinsic features in the biased dataset by investigating the common features between a BA and a BC sample (i.e., a bias-contrastive pair).
Here, the common features also need to be class-discerning, since they might include irrelevant environmental features.
For example, in the above scenario, the common features between an airplane in the sky (BA sample) and an airplane on the runway (BC sample) might include the features of the wings, the body, and trees.
In this case, the intrinsic features are the shapes of the wings and the body, which can distinguish the airplane class from the others.
Specifically, we introduce an intrinsic feature enhancement (IE) weight that identifies the spatial regions of intrinsic features commonly appearing in a bias-contrastive pair.
We leverage an auxiliary sample in addition to the original input to construct the bias-contrastive pair.
Since the majority of the original inputs from the training samples are BA samples, we mainly adopt BC samples as the auxiliary samples.
To achieve this without bias information, we present a bias-negative (BN) score that identifies BC samples by employing the classification loss of a biased model.
Our IE weight investigates the common features in the bias-contrastive pair and identifies the class-discerning features among them.
Within the identified intrinsic features, we enhance the features that are relatively under-exploited in the BA samples compared to the BC samples.
In this way, we can explicitly provide our model with spatial guidance for intrinsic attributes without using bias labels.
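To make these two components concrete, a rough sketch is given below; it is only one plausible instantiation, and the pooled prototype for the BC features, the CAM-like projection onto the classifier weight, and the normalization are illustrative assumptions rather than the exact formulation of the method.
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def bias_negative_score(f_b, x, y):
    """BN score: per-sample classification loss of the biased model f_B.
    Samples with a high score are treated as bias-conflicting (BC) and
    serve as auxiliary samples for the bias-contrastive pairs."""
    return F.cross_entropy(f_b(x), y, reduction="none")

def ie_weight(feat_ba, feat_bc, class_weight):
    """Illustrative spatial IE weight for the BA member of a pair.

    feat_ba, feat_bc: (C, H, W) feature maps of a bias-aligned and a
        bias-conflicting sample of the same class.
    class_weight:     (C,) classifier weight vector of that class.
    """
    c, h, w = feat_ba.shape
    # Common features: similarity of each BA location to the pooled BC
    # features, so the two images need not be spatially aligned.
    bc_proto = F.normalize(feat_bc.mean(dim=(1, 2)), dim=0)       # (C,)
    ba_locs = F.normalize(feat_ba.reshape(c, h * w), dim=0)       # (C, HW)
    common = (bc_proto @ ba_locs).clamp(min=0).reshape(h, w)
    # Class-discerning regions: CAM-like projection onto the class weight.
    discerning = torch.einsum("c,chw->hw", class_weight, feat_ba).clamp(min=0)
    weight = common * discerning
    return weight / (weight.max() + 1e-8)   # spatial guidance in [0, 1]
\end{verbatim}
The resulting map can then be used, for example, to re-weight the BA sample's feature map or its loss so that the under-exploited intrinsic regions contribute more to the prediction.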
We verify the effectiveness of our method on both synthetic and real-world datasets with various levels of bias severity.
Furthermore, the in-depth analysis demonstrates that our method successfully guides the model to make predictions based on the intrinsic features.
\begin{abstract}
In the image classification task, deep neural networks frequently rely on bias attributes that are spuriously correlated with a target class in the presence of dataset bias, resulting in degraded performance when applied to data without bias attributes.
The task of debiasing aims to compel classifiers to learn intrinsic attributes that inherently define a target class rather than focusing on bias attributes.
While recent approaches mainly focus on emphasizing the learning of data samples without bias attributes (i.e., bias-conflicting samples) compared to samples with bias attributes (i.e., bias-aligned samples), they fall short of directly guiding models where to focus for learning intrinsic features.
To address this limitation, this paper proposes a method that provides the model with explicit spatial guidance that indicates the region of intrinsic features.
We first identify the intrinsic features by investigating the class-discerning common features between a bias-aligned (BA) sample and a bias-conflicting (BC) sample (i.e., bias-contrastive pair).
Next, we enhance the intrinsic features in the BA sample that are relatively under-exploited for prediction compared to the BC sample.
To construct the bias-contrastive pair without using bias information, we introduce a bias-negative score that distinguishes BC samples from BA samples employing a biased model.
The experiments demonstrate that our method achieves state-of-the-art performance on synthetic and real-world datasets with various levels of bias severity.
\end{abstract}
\blfootnote{* indicates equal contribution.}
\vspace{-4mm} <|paper_end|> | [
"<|reference_start|> Going Deeper with Convolutions: We propose a deep convolutional neural network architecture codenamed \"Inception\", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection. <|reference_end|>",
"<|reference_start|> Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation. <|reference_end|>",
"<|reference_start|> Learning Not to Learn: Training Deep Neural Networks with Biased Data: We propose a novel regularization algorithm to train deep neural networks, in which data at training time is severely biased. Since a neural network efficiently learns data distribution, a network is likely to learn the bias information to categorize input data. It leads to poor performance at test time, if the bias is, in fact, irrelevant to the categorization. In this paper, we formulate a regularization loss based on mutual information between feature embedding and bias. Based on the idea of minimizing this mutual information, we propose an iterative algorithm to unlearn the bias information. We employ an additional network to predict the bias distribution and train the network adversarially against the feature embedding network. At the end of learning, the bias prediction network is not able to predict the bias not because it is poorly trained, but because the feature embedding network successfully unlearns the bias information. We also demonstrate quantitative and qualitative experimental results which show that our algorithm effectively removes the bias information from feature embedding. <|reference_end|>",
"<|reference_start|> Learning De-biased Representations with Biased Representations: Many machine learning algorithms are trained and evaluated by splitting data from a single source into training and test sets. While such focus on in-distribution learning scenarios has led to interesting advancement, it has not been able to tell if models are relying on dataset biases as shortcuts for successful prediction (e.g., using snow cues for recognising snowmobiles), resulting in biased models that fail to generalise when the bias shifts to a different class. The cross-bias generalisation problem has been addressed by de-biasing training data through augmentation or re-sampling, which are often prohibitive due to the data collection cost (e.g., collecting images of a snowmobile on a desert) and the difficulty of quantifying or expressing biases in the first place. In this work, we propose a novel framework to train a de-biased representation by encouraging it to be different from a set of representations that are biased by design. This tactic is feasible in many scenarios where it is much easier to define a set of biased representations than to define and quantify bias. We demonstrate the efficacy of our method across a variety of synthetic and real-world biases; our experiments show that the method discourages models from taking bias shortcuts, resulting in improved generalisation. Source code is available at https://github.com/clovaai/rebias. <|reference_end|>"
] | [
0,
1,
5,
7
] | {"<|multi_cite_1_1|>": "arxiv-66180", "<|multi_cite_1_2|>": "arxiv-88870", "<|multi_cite_1_3|>": "arxiv-65675", "<|multi_cite_1_4|>": "arxiv-98503", "<|cite_2|>": "ss-1117031", "<|multi_cite_3_1|>": "arxiv-185730", "<|multi_cite_3_2|>": "arxiv-195394", "<|multi_cite_3_3|>": "arxiv-227516", "<|multi_cite_3_4|>": "arxiv-324833", "<|multi_cite_4_1|>": "arxiv-276536", "<|multi_cite_4_2|>": "arxiv-355983", "<|multi_cite_4_3|>": "arxiv-423172", "<|multi_cite_4_4|>": "arxiv-352725", "<|multi_cite_4_5|>": "arxiv-459703"} |
1703.00206 | <|paper_start|> Title: Scaling Agile Development in Mechatronic Organizations - A Comparative Case Study
Abstract: Scaling Agile Development in Mechatronic Organizations - A Comparative Case Study: Agile software development principles enable companies to successfully and quickly deliver software by meeting their customers' expectations while focusing on high quality. Many companies working with pure software systems have adopted these principles, but implementing them in companies dealing with non-pure software products is challenging. We identified a set of goals and practices to support large-scale agile development in companies that develop software-intense mechatronic systems. We used an inductive approach based on empirical data collected during a longitudinal study with six companies in the Nordic region. The data collection took place over two years through focus group workshops, individual on-site interviews, and complementary surveys. The primary benefit of large-scale agile development is improved quality, enabled by practices that support regular or continuous integration between teams delivering software, hardware, and mechanics. In this regard, the most beneficial integration cycle for deliveries is every four weeks; while continuous integration on a daily basis would favor software teams, other disciplines do not seem to benefit from faster integration cycles. We identified 108 goals and development practices supporting agile principles among the companies, most of them concerned with integration; therefrom, 26 agile practices are unique to the mechatronics domain to support adopting agile beyond pure software development teams. 16 of these practices are considered key enablers, confirmed by our control cases.
Introduction
\label{sec:intro}
Agile software development aims at developing products that better match a
customer's expectations compared to waterfall or stage-gate methods. Typical
characteristics of agile methods are short and fixed periods consisting of
development, integration, and testing, conducted in small teams that communicate actively, both within the software team and with the customer. This flexibility allows a team to continuously reprioritize a product's features based on stakeholder feedback.
Pure software-driven companies are the typical habitat for adopting agile
with prominent examples being Google, Amazon, or Spotify. The mechatronics
domain, though, where cars are a prime example, is more challenging as
the final product combines software, hardware, and mechanics, with the
involved artifacts being of different natures and contributed by different
disciplines.
We can see two opposing trends affecting R\&D in the mechatronics domain:
Manufacturing and hardware development have long lead-times compared to pure
software products, typically ranging from 1 to 4 years. During the product development process, focus is given to predictability, i.e.~meeting the start-of-production (SOP) with the required mechanical quality, which in practice is achieved by waterfall/stage-gate processes dictating delivery and integration cycles.
In contrast, software development is characterized by increasing speed and being more nimble while keeping quality. This typically enables lead-times of weeks or months, and many agile methods are a response to this. However, there are no established solutions to easily reconcile the aforementioned trends, but the necessity to resolve them in the mechatronics domain motivates in-depth studies to better serve the changing market's needs and to support industrial decision makers.
In this study, we compared experiences and practices from six internationally
leading companies developing and manufacturing mechatronic systems. The software teams in these companies are already following a number of common agile practices, such as small team sizes, regular stand-up meetings,
cross-functional teams, reprioritization, shared backlog, and sprint lengths of up to four weeks. All involved companies are at the threshold to scale agile principles
beyond their individual software teams to reach out to hardware and mechanics.
\subsection{Problem Statement}
\label{sec:problem}
The two trends above typically result in a situation where individual teams are able to reprioritise and implement software features in a 2-4 week cycle, i.e. are agile, while the overall R\&D process is typically still governed by an overarching stage-gate or V-model. Thus, software deliveries were typically planned in time towards pre-scheduled integration points that are determined by mechanics and manufacturing development. As a result, the benefits typically associated with agile development, like short lead-times in launching new or updated products, were not perceived by developers.
\subsection{Research Objectives}
The aim of the study is to unveil a list of agile practices that are
enablers to scale agile beyond software development teams in mechatronic organizations. These practices
support scaling agile principles to also include
hardware and mechanics, neighboring groups, and R\&D departments.
Practitioners from mechatronics organizations who are transforming and adjusting their
internal development processes to accommodate the trends in Section~\ref{sec:problem}
by following large-scale agile frameworks
such as LESS and SAFe <|cite_start|> (Reference: Usage and perceptions of Scrum and and large-scale scrum (LeSS) in academic settings: ) <|cite_end|> <|cite_start|> (Reference: Issues in the Adoption of the Scaled Agile Framework: Agile methods were originally introduced for small sized, colocated teams. Their successful products immediately brought up the issue of adapting the methods also for large and distributed organizations engaged in projects to build major, complex products. Currently the most popular multi-teams agile method is the Scaled Agile Framework (SAFe) which, however, is subject to criticism: it appears to be quite demanding and expensive in terms of human resource and project management practices. Moreover, SAFe allegedly goes against some of the principles of agility. This research attempts to gather a deeper understanding of the matter first reviewing and analysing the studies published on this topic via a multivocal literature review and then with an extended empirical investigation on the matters that appear most controversial via the direct analysis of the work of 25 respondents from 17 different companies located in eight countries. Thus, the originality of this research is in the systemic assessment of the “level of flexibility” of SAFe, highlighting the challenges of adopting this framework as it relates to decision making, structure, and the technical and managerial competencies of the company. The results show that SAFe can be an effective and adequate approach if the company is ready to invest a significant effort and resources into it both in the form of providing time for SAFe to be properly absorbed and specific training for individuals.) <|cite_end|> <|cite_start|> (Reference: Scaled Agile Framework: Presentation and real world example: This case focuses on the applicability of the Scaled Agile Framework (SAFe) founded by Dean Leffingwell. Modern organizations often work with agile software engineering teams using traditional single team-level methods like Scrum, but multiple teams and the program or portfolio level are not part of methods like Scrum. SAFe tries to apply agile methodologies to the whole organisation. The real world example focuses on a key element of SAFe, the Program Increment (PI) planning meeting and how it can improve multiple team collaboration.) <|cite_end|>
benefit from this list to identify practices
supporting a large-scale agile transformation.
\subsection{Context and Limitations}
This study compared organizations with the following characteristics:
\begin{itemize}
\item Large mechatronics organizations,
\item Dealing with a large and diverse product portfolio with regular product upgrades, and
\item Where timely manufacturing plays a large role, while
\item There are strong demands on high quality and safety.
\end{itemize}
\subsection{Contributions}
During our study, we could confirm already known facts about scaled agile development,
such as the challenge of coordinating multiple teams, difficulties with managing requirements,
and hanging on to internal silos <|cite_start|> (Reference: Challenges and success factors for large-scale agile transformations: A systematic literature review: ) <|cite_end|>.
However, we also identified a number of additional challenges and benefits that are new
and unique to software-intense mechatronic systems.
The final result of the study is a set of 26 practices for agile development, which
are particular to the mechatronics domain, of which 16 are considered enablers that,
in addition to well-known practices for large-scale software development, intensify
the adoption of agile beyond pure software development teams.
\subsection{Structure of the Article}
The rest of the article is structured as follows: Section~\ref{sec:method} describes the overall design of our comparative case study and the embodied methods, followed by a presentation of the results in Section~\ref{sec:results}. We discuss our findings with respect to related work in Section~\ref{sec:related} before we conclude in
Section~\ref{sec:conclusion}. <|paper_end|> | [
"<|reference_start|> Usage and perceptions of Scrum and and large-scale scrum (LeSS) in academic settings: <|reference_end|>",
"<|reference_start|> Issues in the Adoption of the Scaled Agile Framework: Agile methods were originally introduced for small sized, colocated teams. Their successful products immediately brought up the issue of adapting the methods also for large and distributed organizations engaged in projects to build major, complex products. Currently the most popular multi-teams agile method is the Scaled Agile Framework (SAFe) which, however, is subject to criticism: it appears to be quite demanding and expensive in terms of human resource and project management practices. Moreover, SAFe allegedly goes against some of the principles of agility. This research attempts to gather a deeper understanding of the matter first reviewing and analysing the studies published on this topic via a multivocal literature review and then with an extended empirical investigation on the matters that appear most controversial via the direct analysis of the work of 25 respondents from 17 different companies located in eight countries. Thus, the originality of this research is in the systemic assessment of the “level of flexibility” of SAFe, highlighting the challenges of adopting this framework as it relates to decision making, structure, and the technical and managerial competencies of the company. The results show that SAFe can be an effective and adequate approach if the company is ready to invest a significant effort and resources into it both in the form of providing time for SAFe to be properly absorbed and specific training for individuals. <|reference_end|>",
"<|reference_start|> Scaled Agile Framework: Presentation and real world example: This case focuses on the applicability of the Scaled Agile Framework (SAFe) founded by Dean Leffingwell. Modern organizations often work with agile software engineering teams using traditional single team-level methods like Scrum, but multiple teams and the program or portfolio level are not part of methods like Scrum. SAFe tries to apply agile methodologies to the whole organisation. The real world example focuses on a key element of SAFe, the Program Increment (PI) planning meeting and how it can improve multiple team collaboration. <|reference_end|>",
"<|reference_start|> Challenges and success factors for large-scale agile transformations: A systematic literature review: <|reference_end|>"
] | [
0,
1,
2,
3
] | {"<|multi_cite_2_2|>": "ss-1954405", "<|multi_cite_2_3|>": "ss-1954406", "<|multi_cite_2_4|>": "ss-1954407", "<|cite_3|>": "ss-2024139"} |
2303.04361 | <|paper_start|> Title: Sample Efficient Multimodal Semantic Augmentation for Incremental Summarization
Abstract: Sample Efficient Multimodal Semantic Augmentation for Incremental Summarization: In this work, we develop a prompting approach for incremental summarization of task videos. We develop a sample-efficient few-shot approach for extracting semantic concepts as an intermediate step. We leverage an existing model for extracting the concepts from images, extend it to videos, and introduce a clustering and querying approach for sample efficiency, motivated by recent advances in perceiver-based architectures. Our work provides further evidence that enriching the input context with relevant entities and actions from the videos and using these as prompts could enhance the summaries generated by the model. We show the results on a relevant dataset and discuss possible directions for the work.
Introduction
Summarization is the consolidated format for a large document and has been widely used for many applications \ie, understanding a long meeting/event, story summarization etc. Abstractive summarization is challenging in the Natural Language Generation(NLG) domain as it requires an understanding of all the salient information in the input document and rewriting logically in a condensed manner rather than selection (extractive). Recent advancements in transformer-based abstractive summarization have shown promising attempts <|cite_start|> (Reference: A two-stage transformer-based approach for variable-length abstractive summarization: This study proposes a two-stage method for variable-length abstractive summarization. This is an improvement over previous models, in that the proposed approach can simultaneously achieve fluent and variable-length abstractive summarization. The proposed abstractive summarization model consists of a text segmentation module and a two-stage Transformer-based summarization module. First, the text segmentation module utilizes a pre-trained Bidirectional Encoder Representations from Transformers (BERT) and a bidirectional long short-term memory (LSTM) to divide the input text into segments. An extractive model based on the BERT-based summarization model (BERTSUM) is then constructed to extract the most important sentence from each segment. For training the two-stage summarization model, first, the extracted sentences are used to train the document summarization module in the second stage. Next, the segments are used to train the segment summarization module in the first stage by simultaneously considering the outputs of the segment summarization module and the pre-trained second-stage document summarization module. The parameters of the segment summarization module are updated by considering the loss scores of the document summarization module as well as the segment summarization module. Finally, collaborative training is applied to alternately train the segment summarization module and the document summarization module until convergence. For testing, the outputs of the segment summarization module are concatenated to provide the variable-length abstractive summarization result. For evaluation, the BERT-biLSTM-based text segmentation model is evaluated using ChWiki_181k database and obtains a good effect in capturing the relationship between sentences. Finally, the proposed variable-length abstractive summarization system achieved a maximum of 70.0% accuracy in human subjective evaluation on the LCSTS dataset.) <|cite_end|> <|cite_start|> (Reference: Efficient Adaptation of Pretrained Transformers for Abstractive Summarization: Large-scale learning of transformer language models has yielded improvements on a variety of natural language understanding tasks. Whether they can be effectively adapted for summarization, however, has been less explored, as the learned representations are less seamlessly integrated into existing neural text production architectures. In this work, we propose two solutions for efficiently adapting pretrained transformer language models as text summarizers: source embeddings and domain-adaptive training. We test these solutions on three abstractive summarization datasets, achieving new state of the art performance on two of them. Finally, we show that these improvements are achieved by producing more focused summaries with fewer superfluous and that performance improvements are more pronounced on more abstractive datasets.) 
<|cite_end|> <|cite_start|> (Reference: Friendly Topic Assistant for Transformer Based Abstractive Summarization: Abstractive document summarization is a comprehensive task including document understanding and summary generation, in which area Transformer-based models have achieved the state-of-the-art performance. Compared with Transformers, topic models are better at learning explicit document semantics, and hence could be integrated into Transformers to further boost their performance. To this end, we rearrange and explore the semantics learned by a topic model, and then propose a topic assistant (TA) including three modules. TA is compatible with various Transformer-based models and user-friendly since i) TA is a plug-and-play model that does not break any structure of the original Transformer network, making users easily fine-tune Transformer+TA based on a well pre-trained model; ii) TA only introduces a small number of extra parameters. Experimental results on three datasets demonstrate that TA is able to improve the performance of several Transformer-based models.) <|cite_end|> with ideas ranging from the two-stage method,domain-adaptive training to plug and play topic models on top of the transformer. Despite these strong advancements in text-based summarization, there is a huge potential for how we can improve summarization from multimodal data. Since in real-time, data prevails in different modes rather than a single mode like text, there has been an increasing demand for how we can bridge the gap between these modalities \ie, cross-modal search applications for video, utilize the text data associated with the video to search for relevant video content <|cite_start|> (Reference: Learning Joint Representations of Videos and Sentences with Web Image Search: Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks.) <|cite_end|> <|cite_start|> (Reference: Multiple feature hashing for real-time large scale near-duplicate video retrieval: Near-duplicate video retrieval (NDVR) has recently attracted lots of research attention due to the exponential growth of online videos. It helps in many areas, such as copyright protection, video tagging, online video usage monitoring, etc. Most of existing approaches use only a single feature to represent a video for NDVR. However, a single feature is often insufficient to characterize the video content. Besides, while the accuracy is the main concern in previous literatures, the scalability of NDVR algorithms for large scale video datasets has been rarely addressed. 
In this paper, we present a novel approach - Multiple Feature Hashing (MFH) to tackle both the accuracy and the scalability issues of NDVR. MFH preserves the local structure information of each individual feature and also globally consider the local structures for all the features to learn a group of hash functions which map the video keyframes into the Hamming space and generate a series of binary codes to represent the video dataset. We evaluate our approach on a public video dataset and a large scale video dataset consisting of 132,647 videos, which was collected from YouTube by ourselves. The experiment results show that the proposed method outperforms the state-of-the-art techniques in both accuracy and efficiency.) <|cite_end|>, which requires a complete understanding of the video without ignoring the subtle differences <|cite_start|> (Reference: Event driven web video summarization by tag localization and key-shot identification: With the explosive growth of web videos on the Internet, it becomes challenging to efficiently browse hundreds or even thousands of videos. When searching an event query, users are often bewildered by the vast quantity of web videos returned by search engines. Exploring such results will be time consuming and it will also degrade user experience. In this paper, we present an approach for event driven web video summarization by tag localization and key-shot mining. We first localize the tags that are associated with each video into its shots. Then, we estimate the relevance of the shots with respect to the event query by matching the shot-level tags with the query. After that, we identify a set of key-shots from the shots that have high relevance scores by exploring the repeated occurrence characteristic of key sub-events. Following the scheme in [6] and [22], we provide two types of summaries, i.e., threaded video skimming and visual-textual storyboard. Experiments are conducted on a corpus that contains 60 queries and more than 10 000 web videos. The evaluation demonstrates the effectiveness of the proposed approach.) <|cite_end|>. Recent work <|cite_start|> (Reference: Multimodal Speech Summarization Through Semantic Concept Learning.: We propose a cascaded multimodal abstractive speech summarization model that generates semantic concepts as an intermediate step towards summarization. We describe a method to leverage existing multimodal dataset annotations to curate groundtruth labels for such intermediate concept modeling. In addition to cascaded training, the concept labels also provide an interpretable intermediate output level that helps improve performance on the downstream summarization task. On the open-domain How2 data, we conduct utterance-level and video-level experiments for two granularities of concepts: Specific and Abstract. We compare various multimodal fusion models for concept generation based on the respective input modalities. We observe consistent improvements in concept modeling by using multimodal adaptation models over unimodal models. Using the cascaded multimodal speech summarization model, we see a significant improvement of 7.5 METEOR points and 5.1 ROUGE-L points compared to previous methods of speech summarization. Finally, we show the benefits of scalability of the proposed approaches on 2000h of video data.) <|cite_end|> suggests that learning a semantic concept as an intermediate step can help the model to learn efficiently. 
Learning a semantic concept has always been beneficial in categorization tasks like scene recognition, video tagging, etc.~\cite{zhou2017places,ghadiyaram2019large}.
\par
Recent advancements in the vision-language-based models <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> <|cite_start|> (Reference: Flamingo: a Visual Language Model for Few-Shot Learning: Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data.) <|cite_end|> have shown immense potential for generating text-based descriptions from images/videos. In our context, we refer to these text-based descriptions as "semantic concepts". Our work utilizes learning of these semantic concepts as an intermediate step from the videos. 
These semantic concepts, along with the transcriptions (semantic augmentation), are fed as input to a pre-trained summarizer model to enrich its performance. In this work, we address the problem of (\rom{1}) generating semantically relevant annotations of a video (semantic concepts) using a fixed number of sampled frames from each video segment, and (\rom{2}) utilizing these semantic concepts along with the input transcription (semantic augmentation) to enrich the summarization output of pre-trained models
(\ie, BART).
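A minimal sketch of this semantic-augmentation pipeline, using off-the-shelf Hugging Face checkpoints, is shown below; the checkpoint names, the candidate concept vocabulary, and the prompt template are illustrative assumptions and not the exact configuration used in our experiments.
\begin{verbatim}
import torch
from transformers import (BartForConditionalGeneration, BartTokenizer,
                          CLIPModel, CLIPProcessor)

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_proc = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
bart_tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn")

@torch.no_grad()
def segment_concepts(frames, candidates, k=5):
    """Score a candidate concept vocabulary against the sampled frames
    (a list of PIL images) of a segment with CLIP; return the top-k concepts."""
    inputs = clip_proc(text=candidates, images=frames,
                       return_tensors="pt", padding=True)
    logits = clip(**inputs).logits_per_image            # (frames, candidates)
    scores = logits.softmax(dim=-1).mean(dim=0)         # average over frames
    return [candidates[i] for i in scores.topk(k).indices.tolist()]

@torch.no_grad()
def summarize_segment(transcript, frames, candidates):
    """Prepend the extracted concepts to the transcript and summarize with BART."""
    concepts = segment_concepts(frames, candidates)
    prompt = "Concepts: " + ", ".join(concepts) + ". Transcript: " + transcript
    ids = bart_tok(prompt, return_tensors="pt", truncation=True).input_ids
    out = bart.generate(ids, num_beams=4, max_length=60)
    return bart_tok.decode(out[0], skip_special_tokens=True)
\end{verbatim}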
In summary, our contributions are the following:
\begin{itemize}
\item We propose a novel CLIP-based approach <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> to generate semantic concepts from video frames.
\item In order to maintain diversity in each batch, we propose a clustering-based batch creation approach (see the sketch after this list).
\item We have experimented with our proposed approach using the YOUCOOK2 <|cite_start|> (Reference: Towards Automatic Learning of Procedures from Web Instructional Videos: The potential for agents, whether embodied or software, to learn by observing other agents performing procedures involving objects and actions is rich. Current research on automatic procedure learning heavily relies on action labels or video subtitles, even during the evaluation phase, which makes them infeasible in real-world scenarios. This leads to our question: can the human-consensus structure of a procedure be learned from a large set of long, unconstrained videos (e.g., instructional videos from YouTube) with only visual evidence? To answer this question, we introduce the problem of procedure segmentation--to segment a video procedure into category-independent procedure segments. Given that no large-scale dataset is available for this problem, we collect a large-scale procedure segmentation dataset with procedure segments temporally localized and described; we use cooking videos and name the dataset YouCook2. We propose a segment-level recurrent network for generating procedure segments by modeling the dependencies across segments. The generated segments can be used as pre-processing for other tasks, such as dense video captioning and event parsing. We show in our experiments that the proposed model outperforms competitive baselines in procedure segmentation.) <|cite_end|> dataset. The results perfectly demonstrate the efficiency of our approach.
\end{itemize}{
}
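One possible reading of the clustering-based batch creation referenced above is sketched below with scikit-learn k-means over frame embeddings (\eg, CLIP features); the number of clusters and the round-robin sampling policy are illustrative assumptions rather than the exact procedure used here.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def diverse_batches(frame_embeddings, batch_size, seed=0):
    """Cluster frame embeddings (shape (N, D)) and build batches
    round-robin over clusters so that each batch stays diverse."""
    n = len(frame_embeddings)
    k = max(1, n // batch_size)
    labels = KMeans(n_clusters=k, n_init=10,
                    random_state=seed).fit_predict(frame_embeddings)
    rng = np.random.default_rng(seed)
    pools = [rng.permutation(np.where(labels == c)[0]).tolist()
             for c in range(k)]
    batches, current = [], []
    while any(pools):
        for pool in pools:                 # one frame per cluster in turn
            if pool:
                current.append(pool.pop())
                if len(current) == batch_size:
                    batches.append(current)
                    current = []
    if current:
        batches.append(current)
    return batches
\end{verbatim}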
\begin{figure*}[h!]
\centering
\includegraphics[width=\linewidth]{figures/summarization_fig.pdf}
\caption{Architecture of the system and the use of semantic augmentation for summarization.}
\end{figure*}
\par <|paper_end|> | [
"<|reference_start|> Learning Joint Representations of Videos and Sentences with Web Image Search: Our objective is video retrieval based on natural language queries. In addition, we consider the analogous problem of retrieving sentences or generating descriptions given an input video. Recent work has addressed the problem by embedding visual and textual inputs into a common space where semantic similarities correlate to distances. We also adopt the embedding approach, and make the following contributions: First, we utilize web image search in sentence embedding process to disambiguate fine-grained visual concepts. Second, we propose embedding models for sentence, image, and video inputs whose parameters are learned simultaneously. Finally, we show how the proposed model can be applied to description generation. Overall, we observe a clear improvement over the state-of-the-art methods in the video and sentence retrieval tasks. In description generation, the performance level is comparable to the current state-of-the-art, although our embeddings were trained for the retrieval tasks. <|reference_end|>",
"<|reference_start|> Multiple feature hashing for real-time large scale near-duplicate video retrieval: Near-duplicate video retrieval (NDVR) has recently attracted lots of research attention due to the exponential growth of online videos. It helps in many areas, such as copyright protection, video tagging, online video usage monitoring, etc. Most of existing approaches use only a single feature to represent a video for NDVR. However, a single feature is often insufficient to characterize the video content. Besides, while the accuracy is the main concern in previous literatures, the scalability of NDVR algorithms for large scale video datasets has been rarely addressed. In this paper, we present a novel approach - Multiple Feature Hashing (MFH) to tackle both the accuracy and the scalability issues of NDVR. MFH preserves the local structure information of each individual feature and also globally consider the local structures for all the features to learn a group of hash functions which map the video keyframes into the Hamming space and generate a series of binary codes to represent the video dataset. We evaluate our approach on a public video dataset and a large scale video dataset consisting of 132,647 videos, which was collected from YouTube by ourselves. The experiment results show that the proposed method outperforms the state-of-the-art techniques in both accuracy and efficiency. <|reference_end|>",
"<|reference_start|> Multimodal Speech Summarization Through Semantic Concept Learning.: We propose a cascaded multimodal abstractive speech summarization model that generates semantic concepts as an intermediate step towards summarization. We describe a method to leverage existing multimodal dataset annotations to curate groundtruth labels for such intermediate concept modeling. In addition to cascaded training, the concept labels also provide an interpretable intermediate output level that helps improve performance on the downstream summarization task. On the open-domain How2 data, we conduct utterance-level and video-level experiments for two granularities of concepts: Specific and Abstract. We compare various multimodal fusion models for concept generation based on the respective input modalities. We observe consistent improvements in concept modeling by using multimodal adaptation models over unimodal models. Using the cascaded multimodal speech summarization model, we see a significant improvement of 7.5 METEOR points and 5.1 ROUGE-L points compared to previous methods of speech summarization. Finally, we show the benefits of scalability of the proposed approaches on 2000h of video data. <|reference_end|>",
"<|reference_start|> Flamingo: a Visual Language Model for Few-Shot Learning: Building models that can be rapidly adapted to novel tasks using only a handful of annotated examples is an open challenge for multimodal machine learning research. We introduce Flamingo, a family of Visual Language Models (VLM) with this ability. We propose key architectural innovations to: (i) bridge powerful pretrained vision-only and language-only models, (ii) handle sequences of arbitrarily interleaved visual and textual data, and (iii) seamlessly ingest images or videos as inputs. Thanks to their flexibility, Flamingo models can be trained on large-scale multimodal web corpora containing arbitrarily interleaved text and images, which is key to endow them with in-context few-shot learning capabilities. We perform a thorough evaluation of our models, exploring and measuring their ability to rapidly adapt to a variety of image and video tasks. These include open-ended tasks such as visual question-answering, where the model is prompted with a question which it has to answer; captioning tasks, which evaluate the ability to describe a scene or an event; and close-ended tasks such as multiple-choice visual question-answering. For tasks lying anywhere on this spectrum, a single Flamingo model can achieve a new state of the art with few-shot learning, simply by prompting the model with task-specific examples. On numerous benchmarks, Flamingo outperforms models fine-tuned on thousands of times more task-specific data. <|reference_end|>"
] | [
3,
4,
6,
8
] | {"<|multi_cite_1_1|>": "ss-1860209", "<|multi_cite_1_2|>": "arxiv-207329", "<|multi_cite_1_3|>": "ss-989613", "<|multi_cite_2_1|>": "arxiv-103568", "<|multi_cite_2_2|>": "ss-2144361", "<|cite_3|>": "ss-1014916", "<|cite_4|>": "ss-931446", "<|multi_cite_5_1|>": "arxiv-323919", "<|multi_cite_5_2|>": "arxiv-416418", "<|cite_6|>": "arxiv-323919", "<|cite_7|>": "arxiv-120255"} |
2203.12900 | <|paper_start|> Title: Two-timescale Resource Allocation for Automated Networks in IIoT
Abstract: Two-timescale Resource Allocation for Automated Networks in IIoT: The rapid technological advances of cellular technologies will revolutionize network automation in industrial internet of things (IIoT). In this paper, we investigate the two-timescale resource allocation problem in IIoT networks with hybrid energy supply, where temporal variations of energy harvesting (EH), electricity price, channel state, and data arrival exhibit different granularity. The formulated problem consists of energy management at a large timescale, as well as rate control, channel selection, and power allocation at a small timescale. To address this challenge, we develop an online solution to guarantee bounded performance deviation with only causal information. Specifically, Lyapunov optimization is leveraged to transform the long-term stochastic optimization problem into a series of short-term deterministic optimization problems. Then, a low-complexity rate control algorithm is developed based on alternating direction method of multipliers (ADMM), which accelerates the convergence speed via the decomposition-coordination approach. Next, the joint channel selection and power allocation problem is transformed into a one-to-many matching problem, and solved by the proposed price-based matching with quota restriction. Finally, the proposed algorithm is verified through simulations under various system configurations.
Introduction
\label{intro}
\subsection{Background and Motivation}
\IEEEPARstart{A}utomated networks rely on seamless integration of advanced self-optimized techniques to improve efficiency, reliability, and operation economics for industrial internet of things (IIoT) applications <|cite_start|> (Reference: Robust mobile crowd sensing: When deep learning meets edge computing: The emergence of MCS technologies provides a cost-efficient solution to accommodate large-scale sensing tasks. However, despite the potential benefits of MCS, there are several critical issues that remain to be solved, such as lack of incentive-compatible mechanisms for recruiting participants, lack of data validation, and high traffic load and latency. This motivates us to develop robust mobile crowd sensing (RMCS), a framework that integrates deep learning based data validation and edge computing based local processing. First, we present a comprehensive state-of-the-art literature review. Then, the conceptual design architecture of RMCS and practical implementations are described in detail. Next, a case study of smart transportation is provided to demonstrate the feasibility of the proposed RMCS framework. Finally, we identify several open issues and conclude the article.) <|cite_end|>. Fifth-generation (5G) cellular technologies provide more resilient network infrastructure for connecting massive IIoT devices. However, carbon dioxide generated by powering cellular infrastructures puts tremendous pressure on the sustainability of 5G-empowered IIoT networks. Faced with the urgent need of green cellular networks, researchers have focused on energy-saving strategies on both data transmission side and energy supply side.
On data transmission side, network sleeping <|cite_start|> (Reference: {Small Cell Base Station Sleep Strategies for Energy Efficiency: Small cell networks offer a promising and viable approach to meeting the increasing demand for high-data-rate wireless applications. With the expected increase in the number of small cell deployments, energy efficiency (EE) is a crucial system design parameter that demands consideration from an eco-sustainability perspective. One way to improve EE is to switch off small cell base stations (BSs) or to keep them in energy-saving mode while preserving the quality of service (QoS) experienced by users. With “bits/joule” as the metric, we aim to optimize EE with the introduction of several levels of sleep depths. Using a stochastic geometry-based heterogeneous cellular network (HCN) model, we derive coverage probability, average achievable rate, and EE in heterogeneous K-tier wireless networks with different sleep modes for small cells. Then, we try to maximize EE under 1) a random sleeping policy and 2) a strategic sleeping policy, with constraints on both coverage probability and wake-up times. Due to the nonconvexity of EE, we propose an alternative low-complexity near-optimal solution by maximizing the lower bound of EE. We use an alternating iterative approach to solve the resulting multivariable optimization problem. Simulation results confirm the effectiveness of the scheme. With improvements of approximately 30% in EE with random sleeping policy, simulation indicates that instantaneous EE can be further improved by 15% with a strategic sleeping policy.) <|cite_end|> and energy-efficient resource allocation techniques <|cite_start|> (Reference: Time{-: The time dependent mixing of B 0 d (cid:0) (cid:22) B 0 d mesons has been observed by using the correlations between the charge of D (cid:3) mesons and the weighted mean charge of particles in each hemisphere. From a reconstructed D (cid:3)(cid:6) sample corresponding to about 1.7 million hadronic Z 0 decays, the mass di(cid:11)erence between the two B 0 d mass eigenstates has been measured to be or, converting into eV=c 2 :) <|cite_end|> are widely mentioned, applied, and continuously improved. On energy supply side, harvesting renewable energy such as solar and wind energy is advocated to power base stations (BSs) <|cite_start|> (Reference: Energy Cooperation in Cellular Networks with Renewable Powered Base Stations: In this paper, we propose a model for energy cooperation between cellular base stations (BSs) with individual hybrid power supplies (including both the conventional grid and renewable energy sources), limited energy storages, and connected by resistive power lines for energy sharing. When the renewable energy profile and energy demand profile at all BSs are deterministic or known ahead of time, we show that the optimal energy cooperation policy for the BSs can be found by solving a linear program. We show the benefits of energy cooperation in this regime. When the renewable energy and demand profiles are stochastic and only causally known at the BSs, we propose an online energy cooperation algorithm and show the optimality properties of this algorithm under certain conditions. Furthermore, the energy-saving performances of the developed offline and online algorithms are compared by simulations, and the effect of the availability of energy state information (ESI) on the performance gains of the BSs' energy cooperation is investigated. 
Finally, we propose a hybrid algorithm that can incorporate offline information about the energy profiles, but operates in an online manner.) <|cite_end|>. However, renewable energy sources with intermittent and fluctuating characteristics have a large impact on reliable BS operation, which may further affect quality of service (QoS) guarantees. A more feasible approach is to utilize both unreliable renewable energy sources and reliable grid power in a complementary manner <|cite_start|> (Reference: Backhaul aware joint uplink and downlink user association for delay‐power trade‐offs in HetNets with hybrid energy sources: In cellular networks, conventional user association algorithms are solely based on downlink (DL) performance, which may lead to inefficient transmission power and high interference in the uplink (UL) transmission. In addition, the backhaul data rate constraint has been neglected by the majority of the existing user association algorithms. However, the backhaul constraint has become more severe in heterogeneous networks (HetNets) where small cells are densely deployed to meet the skyrocketing data traffic demand. In this paper, we propose an optimal backhaul‐aware joint UL and DL user association for delay‐power trade‐offs in HetNets with hybrid energy sources. In the considered HetNets, all the base stations are assumed to be powered by a combination of power grid and renewable energy sources, in order to achieve both uninterrupted and green communications. Taking both UL and DL transmissions into consideration, the proposed user association algorithm aims to improve network quality of service by minimising the sum of UL and DL average traffic delay, as well as to reduce the overall UL power consumption of users and DL on‐grid power consumption by maximising the utilisation of green power harvested from renewable energy sources. To this end, a convex optimisation problem is formulated to minimise the weighted sum of cost of average traffic delay and cost of power consumption. We have proved that the proposed user association algorithm converges to the global optimum, which enables a flexible trade‐off between average traffic delay and power consumption. Simulation results validate the effectiveness of the proposed algorithm in adapting the traffic loads among base stations along with the distribution of green power and the backhaul data rate constraint. Simulation results also demonstrate that the proposed user association algorithm achieves prominent improvement in UL average traffic delay reduction and effectively reduces both the DL on‐grid power consumption and overall UL power consumption of users, with limited sacrifice on DL average traffic delay, compared with the user association algorithm only based on DL performance. Copyright © 2015 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: Power allocation for an energy harvesting transmitter with hybrid energy sources: In this work, we consider a point-to-point communication link where the transmitter has a hybrid supply of energy. Specifically, the hybrid energy is supplied by a constant energy source and an energy harvester, which harvests energy from its surrounding environment and stores it in a battery which suffers from energy leakage. Our goal is to minimize the power consumed by the constant energy source for transmission of a given amount of data in a given number of time intervals. Two scenarios are considered for packet arrival. 
In the first scenario, we assume that all data packets have arrived before transmission begins, whereas in the second scenario, we assume that data packets are arriving during the course of data transmission. For both scenarios, we propose an optimal offline transmit power allocation scheme which provides insight into how to efficiently consume the energy supplied by the constant energy source and the energy harvester. For offline power allocation, we assume that causal and non-causal information regarding the channel and the amount of harvested energy is available a priori. For optimal online power allocation, we adopt a stochastic dynamic programming (DP) approach for both considered scenarios. For online power allocation, only causal information regarding the channel and the amount of harvested energy is assumed available. Due to the inherent high complexity of DP, we propose suboptimal online algorithms which are appealing because of their low complexity. Simulation results reveal that the offline scheme performs best among all considered schemes and the suboptimal online scheme provides a good performance-complexity tradeoff.) <|cite_end|>. In this sense, the coexistence of various energy sources further complicates resource allocation in 5G-empowered IIoT networks. There exist several challenges that remain unsolved.
{First, energy resource allocation and communication resource allocation are intertwined with each other, and the joint optimization problem is NP-hard due to the coupling between energy and communication domains. Second, energy resource allocation and communication resource allocation have different granularities. Generally, energy domain information such as energy harvesting (EH) and electricity price changes in a large timescale such as minutes <|cite_start|> (Reference: Decentralized Coordination of Energy Utilization for
Residential Households in the Smart Grid: In this paper, we investigate the minimization of the total energy cost of multiple residential households in a smart grid neighborhood sharing a load serving entity. Specifically, each household may have renewable generation, energy storage as well as inelastic and elastic energy loads, and the load serving entity attempts to coordinate the energy consumption of these households in order to minimize the total energy cost within this neighborhood. The renewable generation, the energy demand arrival, and the energy cost function are all stochastic processes and evolve according to some, possibly unknown, probabilistic laws. We develop an online control algorithm, called Lyapunov-based cost minimization algorithm (LCMA), which jointly considers the energy management and demand management decisions. LCMA only needs to keep track of the current values of the underlying stochastic processes without requiring any knowledge of their statistics. Moreover, a decentralized algorithm to implement LCMA is also developed, which can preserve the privacy of individual household owners. Numerical results based on real-world trace data show that our control algorithm can effectively reduce the total energy cost in the neighborhood.) <|cite_end|>, while communication domain information such as channel state and data arrival changes in a small timescale such as seconds or even milliseconds <|cite_start|> (Reference: Dynamic Spectrum Access in Multi-Channel Cognitive Radio Networks: In this paper, dynamic spectrum access (DSA) in multi-channel cognitive radio networks (CRNs) is studied. The two fundamental issues in DSA, spectrum sensing and spectrum sharing, for a general scenario are revisited, where the channels present different usage characteristics and the detection performance of individual secondary users (SUs) varies. First, spectrum sensing is investigated, where multiple SUs are coordinated to cooperatively sense the channels owned by the primary users (PUs) for different interests. When the PUs' interests are concerned, cooperative spectrum sensing is performed to better protect the PUs while satisfying the SUs' requirement on the expected access time. For the SUs' interests, the objective is to maximize the expected available time while keeping the interference to PUs under a predefined level. With the dynamics in the channel usage characteristics and the detection capacities, the coordination problems for the above two cases are formulated as nonlinear integer programming problems accordingly, which are proved to be NP-complete. To find the solution efficiently, for the former case, the original problem is transformed into a variant of convex bipartite matching problem by constructing a complete bipartite graph and defining proper weight vectors. Based on the problem transformation, a channel selection algorithm is proposed to compute the solution. For the latter case, the deterministic optimization problem is first transformed to an associated stochastic optimization problem, which is then solved by cross-entropy (CE) method of stochastic optimization. Then, the sharing of the available channels by SUs after sensing is modeled by a channel access game, based on the framework of weighted congestion game. An algorithm for SUs to select access channels to achieve Nash equilibrium (NE) is proposed. Simulation results are presented to validate the performance of the proposed algorithms.) <|cite_end|>. 
Third, communication resource allocation with long-term constraints involves coupling among different time slots as well as coupling between different layers, e.g., rate control in the network layer and power allocation in the physical layer. Existing works that address either single-layer performance or short-term deterministic optimization cannot be applied. Last but not least, the large-scale deployment of IIoT devices introduces complexity issues. Compared with mobile devices and applications, IIoT devices are usually constrained by limited physical space, energy, and communication and computing resources, and IIoT applications have stringent requirements on operation delay and reliability. Therefore, it is important to reduce complexity to cope with numerous implementation constraints and strict operation demands.}
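To make the timescale separation above concrete, one generic way to index the two timescales (our illustrative notation, not necessarily the notation adopted later in this paper) is to divide the time axis into frames of $T$ slots: energy-domain quantities such as EH and electricity price are treated as constant within a frame and updated once per frame, while communication-domain quantities change every slot,
\[
  k = 0, 1, 2, \ldots \ \text{(large-timescale frames)}, \qquad
  t \in \mathcal{T}_k \triangleq \{kT, kT+1, \ldots, (k+1)T-1\} \ \text{(small-timescale slots)},
\]
so that an energy management decision made at the start of frame $k$ constrains the per-slot rate control, channel selection, and power allocation for all $t \in \mathcal{T}_k$.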
The joint optimization of energy and communication resource allocation in renewable energy powered cellular networks has attracted intensive attentions <|cite_start|> (Reference: Optimal Energy Allocation for Wireless Communications with Energy Harvesting Constraints: We consider the use of energy harvesters, in place of conventional batteries with fixed energy storage, for point-to-point wireless communications. In addition to the challenge of transmitting in a channel with time selective fading, energy harvesters provide a perpetual but unreliable energy source. In this paper, we consider the problem of energy allocation over a finite horizon, taking into account channel conditions and energy sources that are time varying, so as to maximize the throughput. Two types of side information (SI) on the channel conditions and harvested energy are assumed to be available: causal SI (of the past and present slots) or full SI (of the past, present and future slots). We obtain structural results for the optimal energy allocation, via the use of dynamic programming and convex optimization techniques. In particular, if unlimited energy can be stored in the battery with harvested energy and the full SI is available, we prove the optimality of a water-filling energy allocation solution where the so-called water levels follow a staircase function.) <|cite_end|> <|cite_start|> (Reference: Energy-Aware Traffic Offloading for Green Heterogeneous Networks: With small cell base stations (SBSs) densely deployed in addition to conventional macro base stations (MBSs), the heterogeneous cellular network (HCN) architecture can effectively boost network capacity. To support the huge power demand of HCNs, renewable energy harvesting technologies can be leveraged. In this paper, we aim to make efficient use of the harvested energy for on-grid power saving while satisfying the quality of service (QoS) requirement. To this end, energy-aware traffic offloading schemes are proposed, whereby user associations, ON-OFF states of SBSs, and power control are jointly optimized according to the statistical information of energy arrival and traffic load. Specifically, for the single SBS case, the power saving gain achieved by activating the SBS is derived in closed form, based on which the SBS activation condition and optimal traffic offloading amount are obtained. Furthermore, a two-stage energy-aware traffic offloading (TEATO) scheme is proposed for the multiple-SBS case, considering various operating characteristics of SBSs with different power sources. Simulation results demonstrate that the proposed scheme can achieve more than 50% power saving gain for typical daily traffic and solar energy profiles, compared with the conventional traffic offloading schemes.) <|cite_end|> <|cite_start|> (Reference: Optimizing energy efficiency over energy-harvesting LTE cellular networks: We consider the problem of downlink scheduling in an LTE network powered by energy harvesting devices. We formulate optimization problems that seek to optimize two popular energy efficiency metrics subject to mandatory LTE network constraints along with energy harvesting causality constraints. We identify a key sub-problem pertaining to maximizing the weighted sum rate that is common for both optimization problems, and is also of independent interest. We show that the latter sub-problem can be reformulated as a constrained submodular set function maximization problem. 
This enables us to design constant-factor approximation algorithms for maximizing the weighted sum rate as well as the two energy efficiency metrics over an energy harvesting LTE downlink. Our proposed algorithms are simple to implement and offer superior performance.) <|cite_end|>. Nevertheless, these researches mainly target at single-timescale resource allocation. There are some works taking different time granularities into consideration. In <|cite_start|> (Reference: On the Time Scales of Energy Arrival and Channel Fading in Energy Harvesting Communications: In wireless communication systems powered by harvested energy, besides the channel fading, there exists another dimension of dynamics, i.e., energy arrival variation. In this paper, we propose a framework for analyzing the energy harvesting powered wireless transmissions where the energy arrival variations and the channel fading are of different timescales. The energy arrival rate changes every <inline-formula> <tex-math notation="LaTeX">${N (N \ge 1)}$ </tex-math></inline-formula> time slots, and the channel state changes every <inline-formula> <tex-math notation="LaTeX">${M (M \ge 1)}$ </tex-math></inline-formula> slots. We consider a power allocation problem among the time slots, which can be formulated as a Markov decision process and solved by dynamic programming (DP) algorithm. For the special case that <inline-formula> <tex-math notation="LaTeX">${M=1}$ </tex-math></inline-formula>, a low-complexity two-stage DP algorithm is proposed, which decouples the original problem into inner and outer sub-problems. The inner problem deals with the power allocation in channel fading timescale in every <inline-formula> <tex-math notation="LaTeX">${N}$ </tex-math></inline-formula> slots where the energy arrival rate keeps constant, and the outer problem deals with the energy management when the energy arrival rate changes. Numerical simulations show that the average data rate decreases as <inline-formula> <tex-math notation="LaTeX">${N}$ </tex-math></inline-formula> or <inline-formula> <tex-math notation="LaTeX">${M}$ </tex-math></inline-formula> increases, and the two-stage DP algorithm can perform close to the DP optimal algorithm.) <|cite_end|>, Gong \emph{et al.} studied the timescale difference between energy arrival variation and channel fading, and proposed a low-complexity two-stage joint power allocation and energy management optimization algorithm based on Markov decision process (MDP) and dynamic programming. In <|cite_start|> (Reference: Two-Dimensional Optimization on User Association and Green Energy Allocation for HetNets With Hybrid Energy Sources: In green communications, it is imperative to reduce the total on-grid energy consumption as well as minimize the peak on-grid energy consumption, since the large peak on-grid energy consumption will translate into the high operational expenditure (OPEX) for mobile network operators. In this paper, we consider the two-dimensional optimization to lexicographically minimize the on-grid energy consumption in heterogeneous networks (HetNets). All the base stations (BSs) therein are envisioned to be powered by both power grid and renewable energy sources, and the harvested energy can be stored in rechargeable batteries. The lexicographic minimization of on-grid energy consumption involves the optimization in both the space and time dimensions, due to the temporal and spatial dynamics of mobile traffic and green energy generation. 
The reasonable assumption of time scale separation allows us to decompose the problem into two sub-optimization problems without loss of optimality of the original optimization problem. We first formulate the user association optimization in space dimension via convex optimization to minimize total energy consumption through distributing the traffic across different BSs appropriately in a certain time slot. We then optimize the green energy allocation across different time slots for an individual BS to lexicographically minimize the on-grid energy consumption. To solve the optimization problem, we propose a low complexity optimal offline algorithm with infinite battery capacity by assuming non-causal green energy and traffic information. The proposed optimal offline algorithm serves as performance upper bound for evaluating practical online algorithms. We further develop some heuristic online algorithms with finite battery capacity which require only causal green energy and traffic information. The performance of the proposed optimal offline and online algorithms is evaluated by simulations.) <|cite_end|>, Liu \emph{et al.} investigated the minimization of on-grid energy consumption from both the space and time dimensions, and developed a low-complexity offline algorithm based on non-causal information as well as several heuristic online algorithms based on only causal information. However, both <|cite_start|> (Reference: On the Time Scales of Energy Arrival and Channel Fading in Energy Harvesting Communications: In wireless communication systems powered by harvested energy, besides the channel fading, there exists another dimension of dynamics, i.e., energy arrival variation. In this paper, we propose a framework for analyzing the energy harvesting powered wireless transmissions where the energy arrival variations and the channel fading are of different timescales. The energy arrival rate changes every <inline-formula> <tex-math notation="LaTeX">${N (N \ge 1)}$ </tex-math></inline-formula> time slots, and the channel state changes every <inline-formula> <tex-math notation="LaTeX">${M (M \ge 1)}$ </tex-math></inline-formula> slots. We consider a power allocation problem among the time slots, which can be formulated as a Markov decision process and solved by dynamic programming (DP) algorithm. For the special case that <inline-formula> <tex-math notation="LaTeX">${M=1}$ </tex-math></inline-formula>, a low-complexity two-stage DP algorithm is proposed, which decouples the original problem into inner and outer sub-problems. The inner problem deals with the power allocation in channel fading timescale in every <inline-formula> <tex-math notation="LaTeX">${N}$ </tex-math></inline-formula> slots where the energy arrival rate keeps constant, and the outer problem deals with the energy management when the energy arrival rate changes. Numerical simulations show that the average data rate decreases as <inline-formula> <tex-math notation="LaTeX">${N}$ </tex-math></inline-formula> or <inline-formula> <tex-math notation="LaTeX">${M}$ </tex-math></inline-formula> increases, and the two-stage DP algorithm can perform close to the DP optimal algorithm.) 
<|cite_end|> and <|cite_start|> (Reference: Two-Dimensional Optimization on User Association and Green Energy Allocation for HetNets With Hybrid Energy Sources: In green communications, it is imperative to reduce the total on-grid energy consumption as well as minimize the peak on-grid energy consumption, since the large peak on-grid energy consumption will translate into the high operational expenditure (OPEX) for mobile network operators. In this paper, we consider the two-dimensional optimization to lexicographically minimize the on-grid energy consumption in heterogeneous networks (HetNets). All the base stations (BSs) therein are envisioned to be powered by both power grid and renewable energy sources, and the harvested energy can be stored in rechargeable batteries. The lexicographic minimization of on-grid energy consumption involves the optimization in both the space and time dimensions, due to the temporal and spatial dynamics of mobile traffic and green energy generation. The reasonable assumption of time scale separation allows us to decompose the problem into two sub-optimization problems without loss of optimality of the original optimization problem. We first formulate the user association optimization in space dimension via convex optimization to minimize total energy consumption through distributing the traffic across different BSs appropriately in a certain time slot. We then optimize the green energy allocation across different time slots for an individual BS to lexicographically minimize the on-grid energy consumption. To solve the optimization problem, we propose a low complexity optimal offline algorithm with infinite battery capacity by assuming non-causal green energy and traffic information. The proposed optimal offline algorithm serves as performance upper bound for evaluating practical online algorithms. We further develop some heuristic online algorithms with finite battery capacity which require only causal green energy and traffic information. The performance of the proposed optimal offline and online algorithms is evaluated by simulations.) <|cite_end|> rely on the assumption that the uncertainties follow some well-known probability distributions such as Poisson distribution. They are not suitable for the scenario where the practical probability distributions disagree with the pre-assumed statistical models.
{To facilitate joint optimization of energy and communication resource allocation under distribution free models, Lyapunov optimization has been widely used to provide bounded performance guarantees of resource allocation under all possible realizations of uncertainties <|cite_start|> (Reference: Cocycles and Lyapunov Exponents: ) <|cite_end|>. It has been applied in wireless networks <|cite_start|> (Reference: Adaptive Resource Allocation Algorithm of Lyapunov Optimization for Time-Varying Wireless Networks: In this letter, we investigate the adaptive resource allocation with Lyapunov optimization for time-varying wireless networks. The transmit power minimization is characterized by a stochastic optimization model. A dynamic resource allocation (DRA) algorithm is developed to accommodate the wireless network dynamics, i.e., time-varying wireless channels and as well the queuing dynamics. We investigate the tracking error between the DRA algorithm output and the target optimal resource allocation solution. Based on these results, we further develop an adaptive-compensation resource allocation (ACRA) algorithm, which iterates only once when the network state changes for saving the huge iteration overheads. Finally, we determine a sufficient condition that the ACRA algorithm asymptotically tracks the moving equilibrium point with no tracking errors. Simulation results validate the theoretical analysis of our proposed scheme.) <|cite_end|>, hybrid energy powered cellular networks <|cite_start|> (Reference: Cocycles and Lyapunov Exponents: ) <|cite_end|>, and relay cooperative networks <|cite_start|> (Reference: {Lyapunov: 비선형 동력학 시스템으로 구성된 전력 수요의 시계열 데이터를 예측하기 위해 적용된 신경망 및 퍼지 적응 알고리즘 등은 예측오차가 상대적으로 크게 나타났다. 이는 전력수요 시계열 데이터가 가지고 있는 카오스적인 성질에 기인하며 이중 초기값에 민감한 의존성은 장기적인 예측을 더욱더 어렵게 하는 요인으로 작용한다. 전력수요 시계열 데이터가 가지고 있는 카오스적인 성질을 정량 및 정성적인 방식으로 분석을 수행하고, 시스템 동력학적 특성의 정량분석에 이용되는 Lyapunov 지수를 이용하여 시계열 데이터의 예측 시뮬레이션을 수행하고 예측기간과 오차간의 관계를 분석하였다.) <|cite_end|>, etc. Nevertheless, the above-mentioned works mainly focus on one-timescale stochastic models, and cannot be directly applied to solve the two-timescale resource allocation problem addressed in this paper. Moreover, they cannot well handle the large-scale resource allocation problem with massive IIoT devices. Alternating direction method of multipliers (ADMM) enables low-complexity optimization <|cite_start|> (Reference: Distributed Energy Management for Multiuser Mobile-Edge Computing Systems With Energy Harvesting Devices and QoS Constraints: Mobile-edge computing (MEC) has evolved as a promising technology to alleviate the computing pressure of mobile devices by offloading computation tasks to MEC server. Energy management is challenging since the unpredictability of the energy harvesting (EH) and the quality of service (QoS). In this paper, we investigate the problem of power consumption in a multiuser MEC system with EH devices. The system power consumption, which includes the local execution power and the offloading transmission power, is designated as the main system performance index. First, we formulate the power consumption minimization problem with the battery queue stability and QoS constraints as a stochastic optimization programming, which is difficult to solve due to the time-coupling constraints. Then, we adopt the Lyapunov optimization approach to tackle the problem by reformulating it into a problem with relaxed queue stability constraints. 
We design an online algorithm based on the Lyapunov optimization method, which only uses current states of the mobile users and does not depend on the system statistic information. Furthermore, we propose a distributed algorithm based on the alternating direction method of multipliers to reduce the system computational complexity. We prove the optimality of the online algorithm and the distributed algorithm using rigorous theoretical analysis. Finally, we perform extensive trace-simulations to verify the theoretical results and evaluate the effectiveness of the proposed algorithms.) <|cite_end|>. However, it cannot be directly applied for the two-timescale resource allocation problem of IIoT due to the coupling between energy resource allocation and communication resource allocation in different timescales and layers.}
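For readers unfamiliar with ADMM, the standard scaled-form updates below illustrate why it enables low-complexity, decomposable optimization. This is the textbook formulation for $\min_{x,z} f(x)+g(z)$ subject to $Ax+Bz=c$, not the specific rate control problem solved later in this paper:
\begin{align*}
  x^{k+1} &= \arg\min_{x} \; f(x) + \tfrac{\rho}{2}\,\big\|Ax + Bz^{k} - c + u^{k}\big\|_2^2, \\
  z^{k+1} &= \arg\min_{z} \; g(z) + \tfrac{\rho}{2}\,\big\|Ax^{k+1} + Bz - c + u^{k}\big\|_2^2, \\
  u^{k+1} &= u^{k} + Ax^{k+1} + Bz^{k+1} - c,
\end{align*}
where $\rho>0$ is the penalty parameter and $u$ is the scaled dual variable. Each update touches only one block of variables, which is the decomposition-coordination property exploited by ADMM-based rate control.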
\subsection{Contribution}
Motivated by these gaps, we propose a two-timescale resource allocation algorithm for 5G-empowered automated networks in IIoT with hybrid energy supply.
The main objective is to maximize the long-term network utility via the joint optimization of communication and energy resource allocation under dynamic EH, electricity prices, channel states, and data arrivals, as well as under the long-term constraints of queue stability and queuing delay.
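Stated abstractly, and using generic placeholder notation rather than the exact symbols defined later in this paper, the problem described above is a constrained long-term stochastic optimization of the form
\begin{align*}
  \max_{\{\boldsymbol{\alpha}(t)\}} \quad & \lim_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} \mathbb{E}\big[\, U\big(\boldsymbol{\alpha}(t), \boldsymbol{\omega}(t)\big) \,\big] \\
  \text{s.t.} \quad & \text{all data and energy queues are mean-rate stable}, \\
  & \boldsymbol{\alpha}(t) \in \mathcal{A}\big(\boldsymbol{\omega}(t)\big) \quad \forall t,
\end{align*}
where $\boldsymbol{\alpha}(t)$ collects the control decisions (energy purchase, rates, channels, powers), $\boldsymbol{\omega}(t)$ is the random system state (EH, electricity price, channel state, data arrivals), $U(\cdot)$ is the per-slot utility, and $\mathcal{A}(\cdot)$ is the feasible action set.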
First, we establish both data and energy queues in different timescales. The joint optimization problem is formulated as a long-term reward-plus-penalty problem, in which the network quality of experience (QoE) is taken as the reward while the energy purchasing cost is taken as the penalty. Then, the long-term problem is further converted to a short-term deterministic problem and decomposed into several subproblems in different timescales by leveraging Lyapunov optimization. Next, by opportunistically minimizing the upper bound of drift-minus-utility, the separated energy management, rate control, channel selection, and power allocation subproblems are solved sequentially by using the proposed heuristic energy scheduling algorithm, the ADMM-based low-complexity rate control algorithm, and the matching-based joint channel selection and power allocation algorithm, respectively.
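As a reference point for the decomposition above, the standard Lyapunov drift-minus-utility argument (again in generic notation, with $Q_i(t)$ denoting queue backlogs, $a_i(t)$ and $b_i(t)$ the corresponding arrivals and services, $U(t)$ the per-slot utility, and $V>0$ the utility-versus-backlog trade-off weight) defines
\begin{align*}
  L\big(\boldsymbol{Q}(t)\big) \triangleq \frac{1}{2}\sum_{i} Q_i(t)^2, \qquad
  \Delta(t) \triangleq \mathbb{E}\big[L\big(\boldsymbol{Q}(t+1)\big) - L\big(\boldsymbol{Q}(t)\big) \,\big|\, \boldsymbol{Q}(t)\big],
\end{align*}
and bounds
\begin{align*}
  \Delta(t) - V\,\mathbb{E}\big[U(t) \,\big|\, \boldsymbol{Q}(t)\big]
  \le B + \sum_{i} Q_i(t)\,\mathbb{E}\big[a_i(t) - b_i(t) \,\big|\, \boldsymbol{Q}(t)\big]
  - V\,\mathbb{E}\big[U(t) \,\big|\, \boldsymbol{Q}(t)\big],
\end{align*}
where $B$ is a finite constant. Greedily minimizing the right-hand side in every slot, given the observed system state, is the ``opportunistic minimization of the upper bound'' referred to above, and it yields the familiar $O(1/V)$ utility gap versus $O(V)$ queue backlog trade-off.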
The main contributions are summarized as follows.
\begin{itemize}
\item \emph{Large-timescale energy management optimization under dynamic EH and electricity price:} The proposed algorithm decouples the large-timescale energy management optimization from the small-timescale communication resource allocation. The proposed heuristic energy scheduling algorithm dynamically optimizes the utilization of harvested energy and grid energy without requiring any prior knowledge of future EH and electricity prices.
\item \emph{Small-timescale joint optimization of rate control, channel selection, and power allocation:} The proposed ADMM-based low-complexity rate control algorithm decomposes the large-scale optimization problem into a series of subproblems with lower complexity and accelerates the convergence speed via effective coordination of subproblem solutions. The joint optimization of channel selection and power control is transformed into a one-to-many matching problem and solved by a proposed price-based matching algorithm with quota restriction (an illustrative sketch of such a quota-restricted matching is given after this list).
\item \emph{Comprehensive theoretical analysis and performance validation:} We provide a comprehensive theoretical analysis of the proposed algorithm in terms of optimality, convergence, and complexity. Extensive simulations are conducted under different scenarios to demonstrate its performance gains.
\end{itemize}
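The following sketch illustrates one simple way to realize a price-based one-to-many matching with quota restriction, in the spirit of the second contribution above: devices iteratively propose to their most preferred channels at the current prices, an oversubscribed channel raises its price, and each channel keeps at most its quota of devices. The utility values, price-update rule, and stopping criterion here are illustrative assumptions, not the exact algorithm developed later in this paper.
\begin{verbatim}
# Illustrative sketch: price-based one-to-many matching with quotas.
# Devices propose to channels; an oversubscribed channel raises its price
# and keeps only its best `quota` proposers, so quotas are always respected.
def price_based_matching(utility, quota, price_step=0.1, max_rounds=1000):
    """utility[d][c]: value of channel c to device d; quota[c]: capacity of c."""
    num_devices = len(utility)
    num_channels = len(utility[0])
    prices = [0.0] * num_channels
    match = [None] * num_devices  # match[d] = assigned channel, or None

    for _ in range(max_rounds):
        proposals = {c: [] for c in range(num_channels)}
        for d in range(num_devices):
            if match[d] is None:
                # Propose to the channel with the best price-adjusted utility.
                best_c = max(range(num_channels),
                             key=lambda c: utility[d][c] - prices[c])
                proposals[best_c].append(d)
        if all(len(p) == 0 for p in proposals.values()):
            break  # every device is matched, nothing left to propose

        for c, cand in proposals.items():
            if not cand:
                continue
            holders = [d for d in range(num_devices) if match[d] == c]
            pool = sorted(set(holders + cand),
                          key=lambda d: utility[d][c], reverse=True)
            accepted = set(pool[:quota[c]])
            for d in pool:
                match[d] = c if d in accepted else None
            if len(pool) > quota[c]:
                prices[c] += price_step  # oversubscribed: raise the price
    return match, prices
\end{verbatim}
For example, with utility = [[1.0, 0.5], [0.9, 0.8], [0.2, 0.7]] and quota = [1, 2], the sketch assigns one device to channel 0 and the other two devices to channel 1, with the price of the contested channel 0 rising until the displaced device prefers channel 1.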
\subsection{Organization}
The rest of this paper is organized as follows. The system model is described in Section \ref{sec:3}. The problem formulation and transformation are provided in Section \ref{sec:Formulated Problem}. Section \ref{sec:4} elaborates on the proposed two-timescale resource allocation algorithm. A comprehensive property analysis is provided in Section \ref{sec:6}. Numerical results and analysis are presented in
Section \ref{sec:7}. Finally, the conclusion is summarized in Section \ref{sec:8}. <|paper_end|> | [
"<|reference_start|> Power allocation for an energy harvesting transmitter with hybrid energy sources: In this work, we consider a point-to-point communication link where the transmitter has a hybrid supply of energy. Specifically, the hybrid energy is supplied by a constant energy source and an energy harvester, which harvests energy from its surrounding environment and stores it in a battery which suffers from energy leakage. Our goal is to minimize the power consumed by the constant energy source for transmission of a given amount of data in a given number of time intervals. Two scenarios are considered for packet arrival. In the first scenario, we assume that all data packets have arrived before transmission begins, whereas in the second scenario, we assume that data packets are arriving during the course of data transmission. For both scenarios, we propose an optimal offline transmit power allocation scheme which provides insight into how to efficiently consume the energy supplied by the constant energy source and the energy harvester. For offline power allocation, we assume that causal and non-causal information regarding the channel and the amount of harvested energy is available a priori. For optimal online power allocation, we adopt a stochastic dynamic programming (DP) approach for both considered scenarios. For online power allocation, only causal information regarding the channel and the amount of harvested energy is assumed available. Due to the inherent high complexity of DP, we propose suboptimal online algorithms which are appealing because of their low complexity. Simulation results reveal that the offline scheme performs best among all considered schemes and the suboptimal online scheme provides a good performance-complexity tradeoff. <|reference_end|>",
"<|reference_start|> Energy-Aware Traffic Offloading for Green Heterogeneous Networks: With small cell base stations (SBSs) densely deployed in addition to conventional macro base stations (MBSs), the heterogeneous cellular network (HCN) architecture can effectively boost network capacity. To support the huge power demand of HCNs, renewable energy harvesting technologies can be leveraged. In this paper, we aim to make efficient use of the harvested energy for on-grid power saving while satisfying the quality of service (QoS) requirement. To this end, energy-aware traffic offloading schemes are proposed, whereby user associations, ON-OFF states of SBSs, and power control are jointly optimized according to the statistical information of energy arrival and traffic load. Specifically, for the single SBS case, the power saving gain achieved by activating the SBS is derived in closed form, based on which the SBS activation condition and optimal traffic offloading amount are obtained. Furthermore, a two-stage energy-aware traffic offloading (TEATO) scheme is proposed for the multiple-SBS case, considering various operating characteristics of SBSs with different power sources. Simulation results demonstrate that the proposed scheme can achieve more than 50% power saving gain for typical daily traffic and solar energy profiles, compared with the conventional traffic offloading schemes. <|reference_end|>",
"<|reference_start|> Cocycles and Lyapunov Exponents: <|reference_end|>",
"<|reference_start|> Distributed Energy Management for Multiuser Mobile-Edge Computing Systems With Energy Harvesting Devices and QoS Constraints: Mobile-edge computing (MEC) has evolved as a promising technology to alleviate the computing pressure of mobile devices by offloading computation tasks to MEC server. Energy management is challenging since the unpredictability of the energy harvesting (EH) and the quality of service (QoS). In this paper, we investigate the problem of power consumption in a multiuser MEC system with EH devices. The system power consumption, which includes the local execution power and the offloading transmission power, is designated as the main system performance index. First, we formulate the power consumption minimization problem with the battery queue stability and QoS constraints as a stochastic optimization programming, which is difficult to solve due to the time-coupling constraints. Then, we adopt the Lyapunov optimization approach to tackle the problem by reformulating it into a problem with relaxed queue stability constraints. We design an online algorithm based on the Lyapunov optimization method, which only uses current states of the mobile users and does not depend on the system statistic information. Furthermore, we propose a distributed algorithm based on the alternating direction method of multipliers to reduce the system computational complexity. We prove the optimality of the online algorithm and the distributed algorithm using rigorous theoretical analysis. Finally, we perform extensive trace-simulations to verify the theoretical results and evaluate the effectiveness of the proposed algorithms. <|reference_end|>"
] | [
5,
9,
15,
19
] | {"<|cite_1|>": "ss-1171305", "<|cite_2|>": "ss-1269433", "<|cite_3|>": "ss-763054", "<|cite_4|>": "arxiv-40580", "<|multi_cite_5_1|>": "ss-1658719", "<|multi_cite_5_2|>": "ss-1081061", "<|cite_6|>": "ss-840020", "<|cite_7|>": "ss-1658720", "<|multi_cite_8_1|>": "arxiv-20330", "<|multi_cite_8_2|>": "arxiv-90378", "<|multi_cite_8_3|>": "ss-1658721", "<|cite_9|>": "ss-1658722", "<|cite_10|>": "ss-1658723", "<|cite_11|>": "ss-1658722", "<|cite_12|>": "ss-1658723", "<|cite_13|>": "ss-1831345", "<|cite_14|>": "ss-1658724", "<|cite_15|>": "ss-1831345", "<|cite_16|>": "ss-1266405", "<|cite_17|>": "ss-1658725"} |
2009.06858 | <|paper_start|> Title: Soft policy optimization using dual-track advantage estimator
Abstract: Soft policy optimization using dual-track advantage estimator: In reinforcement learning (RL), we always expect the agent to explore as many states as possible in the initial stage of training and exploit the explored information in the subsequent stage to discover the most returnable trajectory. Based on this principle, in this paper, we soften the proximal policy optimization by introducing the entropy and dynamically setting the temperature coefficient to balance the opportunity of exploration and exploitation. While maximizing the expected reward, the agent will also seek other trajectories to avoid the local optimal policy. Nevertheless, the increase of randomness induced by entropy will reduce the train speed in the early stage. Integrating the temporal-difference (TD) method and the general advantage estimator (GAE), we propose the dual-track advantage estimator (DTAE) to accelerate the convergence of value functions and further enhance the performance of the algorithm. Compared with other on-policy RL algorithms on the Mujoco environment, the proposed method not only significantly speeds up the training but also achieves the most advanced results in cumulative return.
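Since the abstract above builds on one-step temporal-difference (TD) errors and the generalized advantage estimator (GAE), the short sketch below shows the two standard estimators side by side. How DTAE actually combines or switches between the two tracks is specific to this paper, so the `mix` step here is only a placeholder assumption.
\begin{verbatim}
import numpy as np

def td_and_gae_advantages(rewards, values, gamma=0.99, lam=0.95):
    """Standard one-step TD advantages and GAE(lambda) advantages.
    rewards: r_0..r_{T-1}; values: V(s_0)..V(s_T) (length T+1, bootstrap last)."""
    T = len(rewards)
    deltas = np.array([rewards[t] + gamma * values[t + 1] - values[t]
                       for t in range(T)])
    gae = np.zeros(T)
    running = 0.0
    for t in reversed(range(T)):
        running = deltas[t] + gamma * lam * running
        gae[t] = running
    return deltas, gae  # deltas are the one-step TD advantages

def mix(td_adv, gae_adv, w=0.5):
    """Placeholder for a dual-track combination; the actual DTAE rule may differ."""
    return w * td_adv + (1.0 - w) * gae_adv
\end{verbatim}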
Introduction
Deep reinforcement learning algorithms, which combine the classical RL framework and the high-capacity function approximators (i.e. neural networks) have achieved tremendous advanced results in complicate decision-making tasks such as robotic control <|cite_start|> (Reference: Solving Rubik's Cube with a Robot Hand: We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR) and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik's cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: https://openai.com/blog/solving-rubiks-cube/) <|cite_end|>, recommendation systems <|cite_start|> (Reference: Reinforcement Learning for Slate-based Recommender Systems: A Tractable Decomposition and Practical Methodology: Most practical recommender systems focus on estimating immediate user engagement without considering the long-term effects of recommendations on user behavior. Reinforcement learning (RL) methods offer the potential to optimize recommendations for long-term user engagement. However, since users are often presented with slates of multiple items - which may have interacting effects on user choice - methods are required to deal with the combinatorics of the RL action space. In this work, we address the challenge of making slate-based recommendations to optimize long-term value using RL. Our contributions are three-fold. (i) We develop SLATEQ, a decomposition of value-based temporal-difference and Q-learning that renders RL tractable with slates. Under mild assumptions on user choice behavior, we show that the long-term value (LTV) of a slate can be decomposed into a tractable function of its component item-wise LTVs. (ii) We outline a methodology that leverages existing myopic learning-based recommenders to quickly develop a recommender that handles LTV. (iii) We demonstrate our methods in simulation, and validate the scalability of decomposed TD-learning using SLATEQ in live experiments on YouTube.) <|cite_end|> and game playing <|cite_start|> (Reference: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model: Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. 
MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.) <|cite_end|>, etc. We can divide these algorithms into two categories: model-based and model-free RL. In model-based RL, we must learn not only the policy but also a model of the environment during optimization. Model-based RL therefore allows a deeper understanding of the environment, but it incurs substantial storage and time costs because the mapping space from state-action-reward tuples to next states is extremely large. In addition, model error is introduced into the learning process on top of the value function error. Considering that it is difficult to construct a sufficiently accurate environment model for challenging robot control tasks, we focus on model-free RL to train the agents in this paper.
On-policy learning and off-policy learning are two branches of model-free RL. On-policy RL algorithms require collecting new samples which are generated by the current policy to optimize the policy function at each gradient step. TRPO <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> is one of the representative methods of on-policy RL but it is relatively complicated (second-order optimization) to compute and is incompatible with parameter sharing structure such as between the policy and value function or architectures that include noise <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) <|cite_end|>. PPO <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) 
<|cite_end|> which uses the clipped trick and ACKTR <|cite_start|> (Reference: Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation: In this work, we propose to apply trust region optimization to deep reinforcement learning using a recently proposed Kronecker-factored approximation to the curvature. We extend the framework of natural policy gradient and propose to optimize both the actor and the critic using Kronecker-factored approximate curvature (K-FAC) with trust region; hence we call our method Actor Critic using Kronecker-Factored Trust Region (ACKTR). To the best of our knowledge, this is the first scalable trust region natural gradient method for actor-critic methods. It is also a method that learns non-trivial tasks in continuous control as well as discrete control policies directly from raw pixel inputs. We tested our approach across discrete domains in Atari games as well as continuous domains in the MuJoCo environment. With the proposed methods, we are able to achieve higher rewards and a 2- to 3-fold improvement in sample efficiency on average, compared to previous state-of-the-art on-policy actor-critic methods. Code is available at https://github.com/openai/baselines) <|cite_end|> which uses the Kronecker-factored approximate curvature are proposed to reduce the computational complexity and expand the application scope of trust region methods. Although many studies show the high effectiveness of on-policy algorithms <|cite_start|> (Reference: Separated Trust Regions Policy Optimization Method: In this work, we propose a moderate policy update method for reinforcement learning, which encourages the agent to explore more boldly in early episodes but updates the policy more cautious. Based on the maximum entropy framework, we propose a softer objective with more conservative constraints and build the separated trust regions for optimization. To reduce the variance of expected entropy return, a calculated state policy entropy of Gaussian distribution is preferred instead of collecting log probability by sampling. This new method, which we call separated trust region for policy mean and variance (STRMV), can be view as an extension to proximal policy optimization (PPO) but it is gentler for policy update and more lively for exploration. We test our approach on a wide variety of continuous control benchmark tasks in the MuJoCo environment. The experiments demonstrate that STRMV outperforms the previous state of art on-policy methods, not only achieving higher rewards but also improving the sample efficiency.) <|cite_end|> <|cite_start|> (Reference: Understanding the impact of entropy on policy optimization: Entropy regularization is commonly used to improve policy optimization in reinforcement learning. It is believed to help with \emph{exploration} by encouraging the selection of more stochastic policies. In this work, we analyze this claim using new visualizations of the optimization landscape based on randomly perturbing the loss function. We first show that even with access to the exact gradient, policy optimization is difficult due to the geometry of the objective function. Then, we qualitatively show that in some environments, a policy with higher entropy can make the optimization landscape smoother, thereby connecting local optima and enabling the use of larger learning rates. 
This paper presents new tools for understanding the optimization landscape, shows that policy entropy serves as a regularizer, and highlights the challenge of designing general-purpose policy optimization algorithms.) <|cite_end|> <|cite_start|> (Reference: On-policy Reinforcement Learning with Entropy Regularization: Entropy regularization is an imported idea in reinforcement learning, with great success in recent algorithms like Soft Actor Critic and Soft Q Network. In this work we extend this idea into the on-policy realm. With the soft gradient policy theorem, we construct the maximum entropy reinforcement learning framework for on-policy RL. For policy gradient based on-policy algorithms, policy network is often represented as Gaussian distribution with the action variance restricted to be global for all the states observed from the environment. We propose an idea called action variance scale for policy network and find it can work collaboratively with the idea of entropy regularization. In this paper, we choose the state-of-the-art on-policy algorithm, Proximal Policy Optimization, as our basal algorithm and present Soft Proximal Policy Optimization (SPPO). PPO is a popular on-policy RL algorithm with great stability and parallelism. But like many on-policy algorithm, PPO can also suffer from low sample efficiency and local optimum problem. In the entropy-regularized framework, SPPO can guide the agent to succeed at the task while maintaining exploration by acting as randomly as possible. Our method outperforms prior works on a range of continuous control benchmark tasks, Furthermore, our method can be easily extended to large scale experiment and achieve stable learning at high throughput.) <|cite_end|>, they are still criticized for sample inefficient because of the large demand for new samples at each batch. Off-policy RL algorithms use the experience buffer to reuse the past samples and thereupon are data efficient. The main contenders are Q-function based methods <|cite_start|> (Reference: Playing Atari with Deep Reinforcement Learning: We present the first deep learning model to successfully learn control policies directly from high-dimensional sensory input using reinforcement learning. The model is a convolutional neural network, trained with a variant of Q-learning, whose input is raw pixels and whose output is a value function estimating future rewards. We apply our method to seven Atari 2600 games from the Arcade Learning Environment, with no adjustment of the architecture or learning algorithm. We find that it outperforms all previous approaches on six of the games and surpasses a human expert on three of them.) <|cite_end|> <|cite_start|> (Reference: Dueling Network Architectures for Deep Reinforcement Learning: In recent years there have been many successes of using deep representations in reinforcement learning. Still, many of these applications use conventional architectures, such as convolutional networks, LSTMs, or auto-encoders. In this paper, we present a new neural network architecture for model-free reinforcement learning. Our dueling network represents two separate estimators: one for the state value function and one for the state-dependent action advantage function. The main benefit of this factoring is to generalize learning across actions without imposing any change to the underlying reinforcement learning algorithm. Our results show that this architecture leads to better policy evaluation in the presence of many similar-valued actions. 
Moreover, the dueling architecture enables our RL agent to outperform the state-of-the-art on the Atari 2600 domain.) <|cite_end|> and actor-critic <|cite_start|> (Reference: Asynchronous Methods for Deep Reinforcement Learning: We propose a conceptually simple and lightweight framework for deep reinforcement learning that uses asynchronous gradient descent for optimization of deep neural network controllers. We present asynchronous variants of four standard reinforcement learning algorithms and show that parallel actor-learners have a stabilizing effect on training allowing all four methods to successfully train neural network controllers. The best performing method, an asynchronous variant of actor-critic, surpasses the current state-of-the-art on the Atari domain while training for half the time on a single multi-core CPU instead of a GPU. Furthermore, we show that asynchronous actor-critic succeeds on a wide variety of continuous motor control problems as well as on a new task of navigating random 3D mazes using a visual input.) <|cite_end|>. For example, Schaul et al. proposed the prioritized experience replay to schedule samples and further speed up the optimization of the value function <|cite_start|> (Reference: Prioritized Experience Replay: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.) <|cite_end|>. Hasselt et al. proposed double Q-learning <|cite_start|> (Reference: Deep Reinforcement Learning with Double Q-learning: The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.) <|cite_end|> and Fujimoto et al. proposed TD3 <|cite_start|> (Reference: Addressing Function Approximation Error in Actor-Critic Methods: In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. 
We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.) <|cite_end|> to solve the overestimation problem in off-policy RL. Lillicrap et al. combined actor-critic and deterministic policy gradient to learn competitive policies for tasks with the continuous action space <|cite_start|> (Reference: Continuous control with deep reinforcement learning: We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.) <|cite_end|>, which are difficult for value based off-policy RL algorithms. Haarnoja introduced the maximum entropy framework to actor-critic to increase the exploration of the agent <|cite_start|> (Reference: Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor: Model-free deep reinforcement learning (RL) algorithms have been demonstrated on a range of challenging decision making and control tasks. However, these methods typically suffer from two major challenges: very high sample complexity and brittle convergence properties, which necessitate meticulous hyperparameter tuning. Both of these challenges severely limit the applicability of such methods to complex, real-world domains. In this paper, we propose soft actor-critic, an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep RL methods based on this framework have been formulated as Q-learning methods. By combining off-policy updates with a stable stochastic actor-critic formulation, our method achieves state-of-the-art performance on a range of continuous control benchmark tasks, outperforming prior on-policy and off-policy methods. Furthermore, we demonstrate that, in contrast to other off-policy algorithms, our approach is very stable, achieving very similar performance across different random seeds.) <|cite_end|>. 
Nevertheless, algorithms that combine off-policy RL with deep neural networks present challenges in terms of stability and convergence, especially for high-dimensional continuous control tasks <|cite_start|> (Reference: Sample Efficient Actor-Critic with Experience Replay: This paper presents an actor-critic deep reinforcement learning agent with experience replay that is stable, sample efficient, and performs remarkably well on challenging environments, including the discrete 57-game Atari domain and several continuous control problems. To achieve this, the paper introduces several innovations, including truncated importance sampling with bias correction, stochastic dueling network architectures, and a new trust region policy optimization method.) <|cite_end|> <|cite_start|> (Reference: Convergent Temporal-Difference Learning with Arbitrary Smooth Function Approximation: We introduce the first temporal-difference learning algorithms that converge with smooth value function approximators, such as neural networks. Conventional temporal-difference (TD) methods, such as TD(λ), Q-learning and Sarsa have been used successfully with function approximation in many applications. However, it is well known that off-policy sampling, as well as nonlinear function approximation, can cause these algorithms to become unstable (i.e., the parameters of the approximator may diverge). Sutton et al. (2009a, 2009b) solved the problem of off-policy learning with linear TD algorithms by introducing a new objective function, related to the Bellman error, and algorithms that perform stochastic gradient-descent on this function. These methods can be viewed as natural generalizations to previous TD methods, as they converge to the same limit points when used with linear function approximation methods. We generalize this work to nonlinear function approximation. We present a Bellman error objective function and two gradient-descent TD algorithms that optimize it. We prove the asymptotic almost-sure convergence of both algorithms, for any finite Markov decision process and any smooth value function approximator, to a locally optimal solution. The algorithms are incremental and the computational complexity per time step scales linearly with the number of parameters of the approximator. Empirical results obtained in the game of Go demonstrate the algorithms' effectiveness.) <|cite_end|>.
From the above analysis, two deficiencies limit the application of model-free RL: (1) the algorithms are sample inefficient and require a large number of samples to optimize the value and policy functions; (2) some algorithms are difficult to converge and are sensitive to hyper-parameters or random seeds. Therefore, in this paper, we aim to design a reliable and efficient on-policy RL algorithm for challenging continuous robotic control tasks. First, we introduce an entropy term into the objective to balance exploration and exploitation in RL (soft policy optimization). By dynamically setting the temperature coefficient, the agent explores more states in the initial stage of training and subsequently exploits the learned policy to find trajectories with higher return. However, the cumulative return in the early stage is also weakened, since the agent tends to take more stochastic actions rather than the greedy deterministic action. To tackle this problem, we introduce the concepts of a shadow value function and a shadow policy function. We use the TD method to update the shadow value function and derive the TD advantage estimator (TDAE). TD methods converge faster, but the value functions may be unstable during optimization; in contrast, GAE updates its parameter vectors more cautiously. Integrating TDAE and GAE, we propose the dual-track advantage estimator to accelerate the optimization of the value function and, indirectly, improve sample utilization efficiency. Theoretically, we strictly prove that soft policy optimization improves the policy in each iteration. Results show that the proposed algorithm, called SPOD, not only significantly speeds up training in the early stage but also performs excellently in accumulating return. <|paper_end|>
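As a minimal sketch of the ideas summarized above (the one-step form of TDAE and the mixing weight $\beta$ are illustrative assumptions, not taken from the text), the entropy-regularized objective and the two advantage tracks can be written as
\[ J(\pi) = \mathbb{E}_{\pi}\Big[\sum_{t}\gamma^{t}\big(r(s_t,a_t) + \alpha_t\,\mathcal{H}(\pi(\cdot\mid s_t))\big)\Big], \qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \]
\[ \hat{A}^{\mathrm{GAE}}_t = \sum_{l\ge 0}(\gamma\lambda)^{l}\,\delta_{t+l}, \qquad \hat{A}^{\mathrm{DTAE}}_t = (1-\beta)\,\hat{A}^{\mathrm{GAE}}_t + \beta\,\delta_t, \]
where $\delta_t$ plays the role of TDAE when $V$ is the TD-updated shadow value function, the temperature $\alpha_t$ is decayed so that exploration dominates early in training, and $\beta$ trades the faster but noisier TD track against the more cautious GAE track.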
"<|reference_start|> Solving Rubik's Cube with a Robot Hand: We demonstrate that models trained only in simulation can be used to solve a manipulation problem of unprecedented complexity on a real robot. This is made possible by two key components: a novel algorithm, which we call automatic domain randomization (ADR) and a robot platform built for machine learning. ADR automatically generates a distribution over randomized environments of ever-increasing difficulty. Control policies and vision state estimators trained with ADR exhibit vastly improved sim2real transfer. For control policies, memory-augmented models trained on an ADR-generated distribution of environments show clear signs of emergent meta-learning at test time. The combination of ADR with our custom robot platform allows us to solve a Rubik's cube with a humanoid robot hand, which involves both control and state estimation problems. Videos summarizing our results are available: https://openai.com/blog/solving-rubiks-cube/ <|reference_end|>",
"<|reference_start|> Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model: Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules. <|reference_end|>",
"<|reference_start|> Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters. <|reference_end|>",
"<|reference_start|> Separated Trust Regions Policy Optimization Method: In this work, we propose a moderate policy update method for reinforcement learning, which encourages the agent to explore more boldly in early episodes but updates the policy more cautious. Based on the maximum entropy framework, we propose a softer objective with more conservative constraints and build the separated trust regions for optimization. To reduce the variance of expected entropy return, a calculated state policy entropy of Gaussian distribution is preferred instead of collecting log probability by sampling. This new method, which we call separated trust region for policy mean and variance (STRMV), can be view as an extension to proximal policy optimization (PPO) but it is gentler for policy update and more lively for exploration. We test our approach on a wide variety of continuous control benchmark tasks in the MuJoCo environment. The experiments demonstrate that STRMV outperforms the previous state of art on-policy methods, not only achieving higher rewards but also improving the sample efficiency. <|reference_end|>"
] | [
0,
2,
3,
7
] | {"<|cite_1|>": "arxiv-229065", "<|cite_2|>": "arxiv-206825", "<|cite_3|>": "arxiv-235062", "<|cite_4|>": "arxiv-73321", "<|cite_5|>": "arxiv-129813", "<|cite_6|>": "arxiv-129813", "<|cite_7|>": "arxiv-132151", "<|multi_cite_8_1|>": "ss-2152328", "<|multi_cite_8_2|>": "arxiv-182112", "<|multi_cite_8_3|>": "ss-2152329", "<|multi_cite_9_1|>": "arxiv-54263", "<|multi_cite_9_2|>": "arxiv-87698", "<|cite_10|>": "arxiv-91622", "<|cite_11|>": "arxiv-87502", "<|cite_12|>": "arxiv-84365", "<|cite_13|>": "arxiv-149723", "<|cite_14|>": "arxiv-83736", "<|cite_15|>": "arxiv-144586", "<|multi_cite_16_1|>": "arxiv-109317", "<|multi_cite_16_2|>": "ss-1979391"} |
2407.15913 | <|paper_start|> Title: Test-Time Low Rank Adaptation via Confidence Maximization for Zero-Shot Generalization of Vision-Language Models
Abstract: Test-Time Low Rank Adaptation via Confidence Maximization for Zero-Shot Generalization of Vision-Language Models: The conventional modus operandi for adapting pre-trained vision-language models (VLMs) at test time involves tuning learnable prompts, i.e., test-time prompt tuning. This paper introduces Test-Time Low-rank adaptation (TTL) as an alternative to prompt tuning for zero-shot generalization of large-scale VLMs. Taking inspiration from recent advancements in efficiently fine-tuning large language models, TTL offers a test-time parameter-efficient adaptation approach that updates the attention weights of the transformer encoder by maximizing prediction confidence. The self-supervised confidence maximization objective is specified using a weighted entropy loss that enforces consistency among predictions of augmented samples. TTL introduces only a small number of trainable parameters for low-rank adapters in the model space while keeping the prompts and backbone frozen. Extensive experiments on a variety of natural distribution and cross-domain tasks show that TTL can outperform other techniques for test-time optimization of VLMs in strict zero-shot settings. Specifically, TTL outperforms test-time prompt tuning baselines with a significant improvement on average. Our code is available at https://github.com/Razaimam45/TTL-Test-Time-Low-Rank-Adaptation.
Introduction
\label{sec:intro}
In recent years, foundational vision-language models (VLMs) such as CLIP <|cite_start|> (Reference: Contrastive Test-Time Adaptation: Test-time adaptation is a special setting of unsupervised domain adaptation where a trained model on the source domain has to adapt to the target domain without accessing source data. We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning, along with an online pseudo labeling scheme with refinement that significantly denoises pseudo labels. The contrastive learning task is applied jointly with pseudo labeling, contrasting positive and negative pairs constructed similarly as MoCo but with source-initialized encoder, and excluding same-class negative pairs indicated by pseudo labels. Meanwhile, we produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space, enabled by maintaining a memory queue. Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks while having several desirable properties compared to existing works, including memory efficiency, insensitivity to hyper-parameters, and better model calibration. Project page: sites.google.com/view/adacontrast.) <|cite_end|> have significantly transformed the landscape of computer vision by demonstrating remarkable proficiency in encoding diverse tasks and concepts. Trained on extensive datasets comprising millions of image-text pairs, these models exhibit decent generalizability across a spectrum of tasks. However, the process of adapting these models for specific downstream tasks through \textit{fine-tuning} often results in a compromise on their inherent generalization capabilities <|cite_start|> (Reference: Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution: When transferring a pretrained model to a downstream task, two popular methods are full fine-tuning (updating all the model parameters) and linear probing (updating only the last linear layer -- the "head"). It is well known that fine-tuning leads to better accuracy in-distribution (ID). However, in this paper, we find that fine-tuning can achieve worse accuracy than linear probing out-of-distribution (OOD) when the pretrained features are good and the distribution shift is large. On 10 distribution shift datasets (Breeds-Living17, Breeds-Entity30, DomainNet, CIFAR $\to$ STL, CIFAR10.1, FMoW, ImageNetV2, ImageNet-R, ImageNet-A, ImageNet-Sketch), fine-tuning obtains on average 2% higher accuracy ID but 7% lower accuracy OOD than linear probing. We show theoretically that this tradeoff between ID and OOD accuracy arises even in a simple setting: fine-tuning overparameterized two-layer linear networks. We prove that the OOD error of fine-tuning is high when we initialize with a fixed or random head -- this is because while fine-tuning learns the head, the lower layers of the neural network change simultaneously and distort the pretrained features. Our analysis suggests that the easy two-step strategy of linear probing then full fine-tuning (LP-FT), sometimes used as a fine-tuning heuristic, combines the benefits of both fine-tuning and linear probing. Empirically, LP-FT outperforms both fine-tuning and linear probing on the above datasets (1% better ID, 10% better OOD than full fine-tuning).) 
<|cite_end|> <|cite_start|> (Reference: Robust fine-tuning of zero-shot models: Large pre-trained models such as CLIP or ALIGN offer consistent accuracy across a range of data distributions when performing zero-shot inference (i.e., without fine-tuning on a specific dataset). Although existing fine-tuning methods substantially improve accuracy on a given target distribution, they often reduce robustness to distribution shifts. We address this tension by introducing a simple and effective method for improving robustness while fine-tuning: ensembling the weights of the zero-shot and fine-tuned models (WiSE-FT). Compared to standard fine-tuning, WiSE-FT provides large accuracy improvements under distribution shift, while preserving high accuracy on the target distribution. On ImageNet and five derived distribution shifts, WiSE-FT improves accuracy under distribution shift by 4 to 6 percentage points (pp) over prior work while increasing ImageNet accuracy by 1.6 pp. WiSE-FT achieves similarly large robustness gains (2 to 23 pp) on a diverse set of six further distribution shifts, and accuracy gains of 0.8 to 3.3 pp compared to standard fine-tuning on seven commonly used transfer learning datasets. These improvements come at no additional computational cost during fine-tuning or inference.) <|cite_end|>.
To address this challenge, recent works propose the incorporation of learnable prompts into the CLIP model, either in the textual <|cite_start|> (Reference: Learning to Prompt for Vision-Language Models: Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming -- one needs to spend a significant amount of time on words tuning since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire pre-trained parameters are kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts with a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.) <|cite_end|> <|cite_start|> (Reference: Conditional Prompt Learning for Vision-Language Models: With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning -- a recent trend in NLP -- to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively-tuned manual prompts. In our study we identify a critical problem of CoOp: the learned context is not generalizable to wider unseen classes within the same dataset, suggesting that CoOp overfits base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset; and yields stronger domain generalization performance as well. Code is available at https://github.com/KaiyangZhou/CoOp.) 
<|cite_end|> <|cite_start|> (Reference: Learning to Prompt with Text Only Supervision for Vision-Language Models: Foundational vision-language models such as CLIP are becoming a new paradigm in vision, due to their excellent generalization abilities. However, adapting these models for downstream tasks while maintaining their generalization remains a challenge. In literature, one branch of methods adapts CLIP by learning prompts using visual information. While effective, most of these works require labeled data which is not practical, and often struggle to generalize towards new datasets due to over-fitting on the source data. An alternative approach resorts to training-free methods by generating class descriptions from large language models (LLMs) and perform prompt ensembling. However, these methods often generate class specific prompts that cannot be transferred to other classes, which incur higher costs by generating LLM descriptions for each class separately. In this work, we propose to combine the strengths of these both streams of methods by learning prompts using only text data derived from LLMs. As supervised training of prompts is not trivial due to absence of images, we develop a training approach that allows prompts to extract rich contextual knowledge from LLM data. Moreover, with LLM contextual data mapped within the learned prompts, it enables zero-shot transfer of prompts to new classes and datasets potentially cutting the LLM prompt engineering cost. To the best of our knowledge, this is the first work that learns generalized prompts using text only data. We perform extensive evaluations on 4 benchmarks where our method improves over prior ensembling works while being competitive to those utilizing labeled images. Our code and pre-trained models are available at https://github.com/muzairkhattak/ProText.) <|cite_end|> or visual <|cite_start|> (Reference: Visual Prompt Tuning: The current modus operandi in adapting pre-trained models involves updating all the backbone parameters, ie, full fine-tuning. This paper introduces Visual Prompt Tuning (VPT) as an efficient and effective alternative to full fine-tuning for large-scale Transformer models in vision. Taking inspiration from recent advances in efficiently tuning large language models, VPT introduces only a small amount (less than 1% of model parameters) of trainable parameters in the input space while keeping the model backbone frozen. Via extensive experiments on a wide variety of downstream recognition tasks, we show that VPT achieves significant performance gains compared to other parameter efficient tuning protocols. Most importantly, VPT even outperforms full fine-tuning in many cases across model capacities and training data scales, while reducing per-task storage cost.) <|cite_end|> branch, or both <|cite_start|> (Reference: MaPLe: Multi-modal Prompt Learning: Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. 
In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.) <|cite_end|> <|cite_start|> (Reference: Self-regulating Prompts: Foundational Model Adaptation without Forgetting: Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.) <|cite_end|>. This allows for fine-tuning only the added prompts using a few samples from the target distribution, while keeping the rest of the model frozen. While this approach has been quite effective, fine-tuning on domain-specific data inevitably diminishes the VLM's ability to generalize to unseen domains.
\begin{figure}[!t]
\centering
\begin{subfigure}[b]{0.24\textwidth}
\includegraphics[width=\textwidth]{Figures/entropy_octiles.pdf}
\caption{Entropy level \textit{vs.} Accuracy}
\label{fig:entropy_octiles}
\end{subfigure}
\begin{subfigure}[b]{0.22\textwidth}
\includegraphics[width=\textwidth]{Figures/gaussian.pdf}
\caption{Visual feature statistics}
\label{fig:gaussian}
\end{subfigure}
\caption{(a) Entropy levels corresponding to 8 different octiles result in different performance on Flowers102. (b) TTL implicitly aligns features such that the mean embeddings of test samples better align with those of the source data (LAION) on which CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> is trained.}
\vspace{-0.4cm}
\end{figure}
Test-Time Prompt Tuning (TPT) <|cite_start|> (Reference: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models: Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.) <|cite_end|> was introduced as an alternative to few-shot prompt learning, where the prompts are updated dynamically on the fly for each test sample. However, TPT overlooks the \textit{distribution shift} between the training data of the CLIP model and the test samples, resulting in a subpar performance.
To address the distribution shift, PromptAlign <|cite_start|> (Reference: Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization: The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains -- distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top- 1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently across all datasets compared to the existing state-of-the-art. Our source code and models are available at https://jameelhassan.github.io/promptalign.) <|cite_end|> attempts to align the first-order statistics of the test sample with those of the training data of the CLIP model. However, this approach necessitates access to a proxy dataset mimicking the distribution of CLIP training data. Additionally, it requires pre-trained prompts for initialization, which compromises the \textit{strict zero-shot} assumption. Moreover, the token alignment achieved by PromptAlign is not as precise as that of our method, as illustrated in Figure \ref{fig:gaussian}.
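To make the objective behind these test-time tuning methods concrete, the following is a minimal PyTorch-style sketch of the confidence-selected marginal-entropy loss used by TPT-style adaptation (function and argument names are illustrative, not taken from any released codebase):

\begin{verbatim}
import torch

def confidence_selected_entropy(logits: torch.Tensor,
                                keep_ratio: float = 0.1) -> torch.Tensor:
    """logits: (n_views, n_classes) predictions for augmented views of a
    single test image. Keep the most confident (lowest-entropy) views,
    average their distributions, and return the entropy of that average
    as the self-supervised loss."""
    probs = logits.softmax(dim=-1)
    log_probs = probs.clamp_min(1e-12).log()
    per_view_entropy = -(probs * log_probs).sum(dim=-1)
    n_keep = max(1, int(keep_ratio * logits.shape[0]))
    keep = per_view_entropy.topk(n_keep, largest=False).indices
    avg_probs = probs[keep].mean(dim=0)
    return -(avg_probs * avg_probs.clamp_min(1e-12).log()).sum()
\end{verbatim}

In TPT and PromptAlign this loss is back-propagated into learnable prompt tokens for a small number of steps (often just one) per test sample; TTL instead uses a weighted entropy loss over the augmented views (as described in the abstract) and back-propagates into low-rank adapters while the prompts stay frozen.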
\begin{figure*}[t]
\centering
\begin{subfigure}[b]{0.215\textwidth}
\includegraphics[width=\textwidth]{Figures/existing.pdf}
\caption{Existing ZS methods}
\label{fig:existing_methods}
\end{subfigure}
\begin{subfigure}[b]{0.225\textwidth}
\includegraphics[width=\textwidth]{Figures/ours.pdf}
\caption{TTL (Ours)}
\label{fig:ours_overview}
\end{subfigure}
\begin{subfigure}[b]{0.275\textwidth}
\includegraphics[width=\textwidth]{Figures/star_plot2.pdf}
\caption{Results on visual classification tasks}
\label{fig:star_plot}
\end{subfigure}
\hspace{0.2cm}
\begin{minipage}[b]{0.25\textwidth}
\caption{\textbf{TTL \textit{vs.} other zero-shot optimization methods.} (a) Current methods <|cite_start|> (Reference: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models: Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.) <|cite_end|> <|cite_start|> (Reference: Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning: Benefiting from prompt tuning, recent years have witnessed the promising performance of pre-trained vision-language models, e.g., CLIP, on versatile downstream tasks. In this paper, we focus on a particular setting of learning adaptive prompts on the fly for each test sample from an unseen new domain, which is known as test-time prompt tuning (TPT). Existing TPT methods typically rely on data augmentation and confidence selection. However, conventional data augmentation techniques, e.g., random resized crops, suffers from the lack of data diversity, while entropy-based confidence selection alone is not sufficient to guarantee prediction fidelity. To address these issues, we propose a novel TPT method, named DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data. Specifically, we incorporate augmented data by both conventional method and pre-trained stable diffusion to exploit their respective merits, improving the models ability to adapt to unknown new test data. Moreover, to ensure the prediction fidelity of generated data, we introduce a cosine similarity-based filtration technique to select the generated data with higher similarity to the single test sample. Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13\% compared to the state-of-the-art TPT method. Our code and models will be publicly released.) <|cite_end|> <|cite_start|> (Reference: Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization: The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains -- distribution shift. 
In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top- 1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently across all datasets compared to the existing state-of-the-art. Our source code and models are available at https://jameelhassan.github.io/promptalign.) <|cite_end|> update prompts during inference using self-entropy. (b) TTL introduces low-rank learnable weight matrices at the attention layer of the vision encoder to update the model weights using weighted entropy. (c) TTL outperforms existing baselines across Out-of-Distribution and Cross-Dataset while using less than 0.1\% of all model parameters.}
\label{fig:overview}
\end{minipage}
\vspace{-0.4cm}
\end{figure*}
To address the above limitations of existing test-time adaptation methods, we introduce \textbf{T}est-\textbf{T}ime \textbf{L}ow-rank adaptation (\textbf{TTL}), a parameter-efficient test-time adaptation strategy for VLMs like CLIP. TTL \textit{eliminates} the need for source data distribution during adaptation or pre-trained weights for initialization (Figure \ref{fig:overview}). Originally designed for adapting Large Language Models (LLMs) to new domains, low rank adaptation (LoRA) <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|> has been extensively applied in various multi-modal and generative computer vision tasks <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: LISA: Reasoning Segmentation via Large Language Model: Although perception systems have made remarkable advancements in recent years, they still rely on explicit human instruction or pre-defined categories to identify the target objects before executing visual recognition tasks. Such systems cannot actively reason and comprehend implicit user intention. In this work, we propose a new segmentation task -- reasoning segmentation. The task is designed to output a segmentation mask given a complex and implicit query text. Furthermore, we establish a benchmark comprising over one thousand image-instruction-mask data samples, incorporating intricate reasoning and world knowledge for evaluation purposes. Finally, we present LISA: large Language Instructed Segmentation Assistant, which inherits the language generation capabilities of multimodal Large Language Models (LLMs) while also possessing the ability to produce segmentation masks. We expand the original vocabulary with a <SEG> token and propose the embedding-as-mask paradigm to unlock the segmentation capability. Remarkably, LISA can handle cases involving complex reasoning and world knowledge. Also, it demonstrates robust zero-shot capability when trained exclusively on reasoning-free datasets. 
In addition, fine-tuning the model with merely 239 reasoning segmentation data samples results in further performance enhancement. Both quantitative and qualitative experiments show our method effectively unlocks new reasoning segmentation capabilities for multimodal LLMs. Code, models, and data are available at https://github.com/dvlab-research/LISA.) <|cite_end|> <|cite_start|> (Reference: MiniGPT-v2: large language model as a unified interface for vision-language multi-task learning: Large language models have shown their remarkable capabilities as a general interface for various language-related applications. Motivated by this, we target to build a unified interface for completing many vision-language tasks including image description, visual question answering, and visual grounding, among others. The challenge is to use a single model for performing diverse vision-language tasks effectively with simple multi-modal instructions. Towards this objective, we introduce MiniGPT-v2, a model that can be treated as a unified interface for better handling various vision-language tasks. We propose using unique identifiers for different tasks when training the model. These identifiers enable our model to better distinguish each task instruction effortlessly and also improve the model learning efficiency for each task. After the three-stage training, the experimental results show that MiniGPT-v2 achieves strong performance on many visual question-answering and visual grounding benchmarks compared to other vision-language generalist models. Our model and codes are available at https://minigpt-v2.github.io/) <|cite_end|> <|cite_start|> (Reference: Aligning large multi-modal model with robust instruction tuning: Despite the promising progress in multi-modal tasks, current large multi-modal models (LMM) are prone to hallucinating inconsistent descriptions with respect to the associated image and human instructions. This paper addresses this issue by introducing the first large and diverse visual instruction tuning dataset, named Large-scale Robust Visual (LRV)-Instruction . Our dataset consists of 120k visual instructions generated by GPT4, covering 16 vision-and-language tasks with open-ended instructions and answers. Unlike existing studies that primarily focus on positive instruction samples, we design LRV-Instruction to include both positive and negative instructions for more robust visual instruction tuning. Our negative instructions are designed at two semantic levels: (i) Nonexistent Element Manipulation and (ii) Existent Element Manipulation . To efficiently measure the hallucination generated by LMMs, we propose GPT4-Assisted Visual Instruction Evaluation (GAVIE) , a novel approach to evaluate visual instruction tuning without the need for human-annotated groundtruth answers and can adapt to diverse instruction formats. We conduct comprehensive experiments to investigate the hallucination of LMMs. Our results demonstrate that existing LMMs exhibit significant hallucination when presented with our negative instructions, particularly with Existent Element Manipulation instructions. Moreover, by finetuning MiniGPT4 on LRV-Instruction , we successfully mitigate hallucination while improving performance on public datasets using less training data compared to state-of-the-art methods. Additionally, we observed that a balanced ratio of positive and negative instances in the training data leads to a more robust model. Our project link is available at this link.) 
<|cite_end|> <|cite_start|> (Reference: Going Beyond Nouns With Vision & Language Models Using Synthetic Data: Large-scale pre-trained Vision & Language (VL) models have shown remarkable performance in many applications, enabling replacing a fixed set of supported classes with zero-shot open vocabulary reasoning over (almost arbitrary) natural language prompts. However, recent works have uncovered a fundamental weakness of these models. For example, their difficulty to understand Visual Language Concepts (VLC) that go 'beyond nouns' such as the meaning of non-object words (e.g., attributes, actions, relations, states, etc.), or difficulty in performing compositional reasoning such as understanding the significance of the order of the words in a sentence. In this work, we investigate to which extent purely synthetic data could be leveraged to teach these models to overcome such shortcomings without compromising their zero-shot capabilities. We contribute Synthetic Visual Concepts (SyViC) - a million-scale synthetic dataset and data generation codebase allowing to generate additional suitable data to improve VLC understanding and compositional reasoning of VL models. Additionally, we propose a general VL finetuning strategy for effectively leveraging SyViC towards achieving these improvements. Our extensive experiments and ablations on VL-Checklist, Winoground, and ARO benchmarks demonstrate that it is possible to adapt strong pre-trained VL models with synthetic data significantly enhancing their VLC understanding (e.g. by 9.9% on ARO and 4.3% on VL-Checklist) with under 1% drop in their zero-shot accuracy.) <|cite_end|> <|cite_start|> (Reference: Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation: Parameter Efficient Tuning (PET) has gained attention for reducing the number of parameters while maintaining performance and providing better hardware resource savings, but few studies investigate dense prediction tasks and interaction between modalities. In this paper, we do an investigation of efficient tuning problems on referring image segmentation. We propose a novel adapter called Bridger to facilitate cross-modal information exchange and inject task-specific information into the pre-trained model. We also design a lightweight decoder for image segmentation. Our approach achieves comparable or superior performance with only 1.61\% to 3.38\% backbone parameter updates, evaluated on challenging benchmarks. The code is available at \url{https://github.com/kkakkkka/ETRIS}.) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning: With the advance of text-to-image (T2I) diffusion models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can manifest their imagination into high-quality images at an affordable cost. However, adding motion dynamics to existing high-quality personalized T2Is and enabling them to generate animations remains an open challenge. In this paper, we present AnimateDiff, a practical framework for animating personalized T2I models without requiring model-specific tuning. At the core of our framework is a plug-and-play motion module that can be trained once and seamlessly integrated into any personalized T2Is originating from the same base T2I. 
Through our proposed training strategy, the motion module effectively learns transferable motion priors from real-world videos. Once trained, the motion module can be inserted into a personalized T2I model to form a personalized animation generator. We further propose MotionLoRA, a lightweight fine-tuning technique for AnimateDiff that enables a pre-trained motion module to adapt to new motion patterns, such as different shot types, at a low training and data collection cost. We evaluate AnimateDiff and MotionLoRA on several public representative personalized T2I models collected from the community. The results demonstrate that our approaches help these models generate temporally smooth animation clips while preserving the visual quality and motion diversity. Codes and pre-trained weights are available at https://github.com/guoyww/AnimateDiff.) <|cite_end|> <|cite_start|> (Reference: Stable Video Diffusion: Scaling Latent Video Diffusion Models to Large Datasets: We present Stable Video Diffusion - a latent video diffusion model for high-resolution, state-of-the-art text-to-video and image-to-video generation. Recently, latent diffusion models trained for 2D image synthesis have been turned into generative video models by inserting temporal layers and finetuning them on small, high-quality video datasets. However, training methods in the literature vary widely, and the field has yet to agree on a unified strategy for curating video data. In this paper, we identify and evaluate three different stages for successful training of video LDMs: text-to-image pretraining, video pretraining, and high-quality video finetuning. Furthermore, we demonstrate the necessity of a well-curated pretraining dataset for generating high-quality videos and present a systematic curation process to train a strong base model, including captioning and filtering strategies. We then explore the impact of finetuning our base model on high-quality data and train a text-to-video model that is competitive with closed-source video generation. We also show that our base model provides a powerful motion representation for downstream tasks such as image-to-video generation and adaptability to camera motion-specific LoRA modules. Finally, we demonstrate that our model provides a strong multi-view 3D-prior and can serve as a base to finetune a multi-view diffusion model that jointly generates multiple views of objects in a feedforward fashion, outperforming image-based methods at a fraction of their compute budget. We release code and model weights at https://github.com/Stability-AI/generative-models .) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>. LoRA has two main advantages compared to prompt tuning <|cite_start|> (Reference: Revisiting Parameter-Efficient Tuning: Are We Really There Yet?: Parameter-Efficient Tuning (PETuning) methods have been deemed by many as the new paradigm for using pretrained language models (PLMs). By tuning just a fraction amount of parameters comparing to full model finetuning, PETuning methods claim to have achieved performance on par with or even better than finetuning. In this work, we take a step back and re-examine these PETuning methods by conducting the first comprehensive investigation into the training and evaluation of them. We found the problematic validation and testing practice in current studies, when accompanied by the instability nature of PETuning methods, has led to unreliable conclusions. 
When being compared under a truly fair evaluation protocol, PETuning cannot yield consistently competitive performance while finetuning remains to be the best-performing method in medium- and high-resource settings. We delve deeper into the cause of the instability and observed that the number of trainable parameters and training iterations are two main factors: reducing trainable parameters and prolonging training iterations may lead to higher stability in PETuning methods.) <|cite_end|>. First, LoRA is generally more effective in low-resource (limited data availability) settings; during test-time adaptation, only a single unlabeled test sample is available to update the model. Second, to keep the overall inference time low, only a very limited number of model updates (typically just one) are possible at test time, and LoRA is known to be more stable than prompt tuning in this scenario. It must be highlighted that our work marks \textit{the first exploration of LoRA for test-time adaptation based on a single test sample for zero-shot generalization}.
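To make the low-rank update concrete, the following minimal PyTorch sketch (with module and attribute names of our own choosing, not the implementation used in this work) shows how a frozen linear projection can be augmented with a trainable rank-$r$ residual, so that only two small matrices are optimized at test time:
\begin{verbatim}
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA wrapper: frozen base layer plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                      # pre-trained weights stay frozen
        self.lora_A = nn.Parameter(0.01 * torch.randn(r, base.in_features))
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x):
        # frozen projection + scaled low-rank residual (B @ A) applied to x
        return self.base(x) + self.scaling * (x @ self.lora_A.t() @ self.lora_B.t())

# Hypothetical usage: wrap, e.g., the value projection of one attention block of the visual encoder.
# block.attn.v_proj = LoRALinear(block.attn.v_proj, r=4)
\end{verbatim}
Only the two low-rank matrices receive gradients, which keeps the number of parameters updated per test sample very small.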
Additionally, we introduce a confidence maximization objective that replaces the conventional entropy loss used in <|cite_start|> (Reference: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models: Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.) <|cite_end|> <|cite_start|> (Reference: Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization: The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains -- distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top- 1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently across all datasets compared to the existing state-of-the-art. Our source code and models are available at https://jameelhassan.github.io/promptalign.) <|cite_end|> with a new weighted entropy loss. Existing studies <|cite_start|> (Reference: Recognition in Terra Incognita: It is desirable for detection and classification algorithms to generalize to unfamiliar environments, but suitable benchmarks for quantitatively studying this phenomenon are not yet available. We present a dataset designed to measure recognition generalization to novel environments. The images in our dataset are harvested from twenty camera traps deployed to monitor animal populations. Camera traps are fixed at one location, hence the background changes little across images; capture is triggered automatically, hence there is no human bias. The challenge is learning recognition in a handful of locations, and generalizing animal detection and classification to new locations where no training data is available. 
In our experiments state-of-the-art algorithms show excellent performance when tested at the same location where they were trained. However, we find that generalization to new locations is poor, especially for classification systems.) <|cite_end|> <|cite_start|> (Reference: Shortcut Learning in Deep Neural Networks: Deep learning has triggered the current rise of artificial intelligence and is the workhorse of today's machine intelligence. Numerous success stories have rapidly spread all over science, industry and society, but its limitations have only recently come into focus. In this perspective we seek to distill how many of deep learning's problems can be seen as different symptoms of the same underlying problem: shortcut learning. Shortcuts are decision rules that perform well on standard benchmarks but fail to transfer to more challenging testing conditions, such as real-world scenarios. Related issues are known in Comparative Psychology, Education and Linguistics, suggesting that shortcut learning may be a common characteristic of learning systems, biological and artificial alike. Based on these observations, we develop a set of recommendations for model interpretation and benchmarking, highlighting recent advances in machine learning to improve robustness and transferability from the lab to real-world applications.) <|cite_end|> <|cite_start|> (Reference: Examining and Combating Spurious Features under Distribution Shift: A central goal of machine learning is to learn robust representations that capture the causal relationship between inputs features and output labels. However, minimizing empirical risk over finite or biased datasets often results in models latching on to spurious correlations between the training input/output pairs that are not fundamental to the problem at hand. In this paper, we define and analyze robust and spurious representations using the information-theoretic concept of minimal sufficient statistics. We prove that even when there is only bias of the input distribution (i.e. covariate shift), models can still pick up spurious features from their training data. Group distributionally robust optimization (DRO) provides an effective tool to alleviate covariate shift by minimizing the worst-case training loss over a set of pre-defined groups. Inspired by our analysis, we demonstrate that group DRO can fail when groups do not directly account for various spurious correlations that occur in the data. To address this, we further propose to minimize the worst-case losses over a more flexible set of distributions that are defined on the joint distribution of groups and instances, instead of treating each group as a whole at optimization time. Through extensive experiments on one image and two language tasks, we show that our model is significantly more robust than comparable baselines under various partitions. Our code is available at https://github.com/violet-zct/group-conditional-DRO.) <|cite_end|> highlight the tendency of deep neural networks to leverage both spurious and semantically meaningful features, leading to diminished performance when spurious correlations are prevalent. Hence, relying solely on entropy for confidence estimation may not be consistently reliable under distribution shifts, as it cannot distinguish whether the model is focusing on spurious features. As shown in Figure \ref{fig:entropy_octiles}, a low entropy value is not a guarantee for correct prediction. 
Therefore, in this work, \textit{we propose a weighted entropy loss that assigns relative weights to the different augmentations while encouraging consistent, high-confidence predictions across these augmentations}. Through empirical validation, we demonstrate the sub-optimality of using the standard entropy loss to update parameters at test time and showcase the advantages of optimizing our weighted entropy loss.
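As a rough illustration of what such an objective could look like (the specific weighting scheme below is an assumption made for exposition and need not match the formulation proposed here), the per-view entropies of the augmented predictions can be combined with confidence-derived weights:
\begin{verbatim}
import torch

def weighted_entropy_loss(logits, temperature: float = 1.0):
    """logits: (N, C) class predictions for N augmented views of one test sample.
    Sketch: weight each view's entropy by its (softmax-normalized) confidence."""
    probs = logits.softmax(dim=-1)                                  # (N, C)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)   # (N,)
    confidence = probs.max(dim=-1).values                           # (N,)
    weights = torch.softmax(confidence / temperature, dim=0)        # assumed weighting scheme
    return (weights * entropy).sum()
\end{verbatim}
Minimizing such a loss pushes the model towards confident predictions while letting the more reliable augmented views dominate the single test-time update.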
In summary, our contributions are as follows:
\vspace{-0.20cm}
\begin{itemize}[left=0pt]
\item We introduce \textbf{T}est-\textbf{T}ime \textbf{L}oRA (\textbf{TTL}), a parameter-efficient scheme for low-rank adaptation of VLMs at test-time without relying on source data statistics or pre-trained prompts.
\vspace{-0.20cm}
\item We propose a weighted entropy loss that introduces a confidence maximization objective for updating parameters at test-time, showcasing its superior performance compared to the conventional entropy loss.
\vspace{-0.20cm}
\item We conduct extensive experiments and show that, for domain generalization, TTL achieves an average improvement of 7.49\% over the baseline CLIP and of 2.11\% over the best baseline. For cross-dataset transfer, TTL exhibits a 1.40\% improvement over the baseline.
\end{itemize}
Related Work
\noindent\textbf{Test-Time Adaptation (TTA)}:
TTA <|cite_start|> (Reference: Test-Time Training with Self-Supervision for Generalization under Distribution Shifts: In this paper, we propose Test-Time Training, a general approach for improving the performance of predictive models when training and test data come from different distributions. We turn a single unlabeled test sample into a self-supervised learning problem, on which we update the model parameters before making a prediction. This also extends naturally to data in an online stream. Our simple approach leads to improvements on diverse image classification benchmarks aimed at evaluating robustness to distribution shifts.) <|cite_end|> <|cite_start|> (Reference: Tent: Fully Test-time Adaptation by Entropy Minimization: A model must adapt itself to generalize to new and different data during testing. In this setting of fully test-time adaptation the model has only the test data and its own parameters. We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions. Our method estimates normalization statistics and optimizes channel-wise affine transformations to update online on each batch. Tent reduces generalization error for image classification on corrupted ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on ImageNet-C. Tent handles source-free domain adaptation on digit recognition from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to Cityscapes, and on the VisDA-C benchmark. These results are achieved in one epoch of test-time optimization without altering training.) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> aims to bridge the gap between the training and test data distributions at test time.
While TPT <|cite_start|> (Reference: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models: Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.) <|cite_end|> and CALIP <|cite_start|> (Reference: CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention: Contrastive Language-Image Pre-training (CLIP) has been shown to learn visual representations with great transferability, which achieves promising accuracy for zero-shot classification. To further improve its downstream performance, existing works propose additional learnable modules upon CLIP and fine-tune them by few-shot training sets. However, the resulting extra training cost and data requirement severely hinder the efficiency for model deployment and knowledge transfer. In this paper, we introduce a free-lunch enhancement method, CALIP, to boost CLIP's zero-shot performance via a parameter-free Attention module. Specifically, we guide visual and textual representations to interact with each other and explore cross-modal informative features via attention. As the pre-training has largely reduced the embedding distances between two modalities, we discard all learnable parameters in the attention and bidirectionally update the multi-modal features, enabling the whole process to be parameter-free and training-free. In this way, the images are blended with textual-aware signals and the text representations become visual-guided for better adaptive zero-shot alignment. We evaluate CALIP on various benchmarks of 14 datasets for both 2D image and 3D point cloud few-shot classification, showing consistent zero-shot performance improvement over CLIP. Based on that, we further insert a small number of linear layers in CALIP's attention module and verify our robustness under the few-shot settings, which also achieves leading performance compared to existing methods. Those extensive experiments demonstrate the superiority of our approach for efficient enhancement of CLIP.) <|cite_end|> first explored zero-shot enhancement of pre-trained VLMs, TPT relies on test-time prompt tuning, struggling with explicit alignment of pre-training and test data distributions. CALIP utilizes a parameter-free attention module for cross-modal features. 
PromptAlign <|cite_start|> (Reference: Align Your Prompts: Test-Time Prompting with Distribution Alignment for Zero-Shot Generalization: The promising zero-shot generalization of vision-language models such as CLIP has led to their adoption using prompt learning for numerous downstream tasks. Previous works have shown test-time prompt tuning using entropy minimization to adapt text prompts for unseen domains. While effective, this overlooks the key cause for performance degradation to unseen domains -- distribution shift. In this work, we explicitly handle this problem by aligning the out-of-distribution (OOD) test sample statistics to those of the source data using prompt tuning. We use a single test sample to adapt multi-modal prompts at test time by minimizing the feature distribution shift to bridge the gap in the test domain. Evaluating against the domain generalization benchmark, our method improves zero-shot top- 1 accuracy beyond existing prompt-learning techniques, with a 3.08% improvement over the baseline MaPLe. In cross-dataset generalization with unseen categories across 10 datasets, our method improves consistently across all datasets compared to the existing state-of-the-art. Our source code and models are available at https://jameelhassan.github.io/promptalign.) <|cite_end|> builds on TPT and aligns distribution statistics by pre-training the learnable prompts using training data, deviating from the \textit{strict zero-shot} assumption. DiffTPT <|cite_start|> (Reference: Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning: Benefiting from prompt tuning, recent years have witnessed the promising performance of pre-trained vision-language models, e.g., CLIP, on versatile downstream tasks. In this paper, we focus on a particular setting of learning adaptive prompts on the fly for each test sample from an unseen new domain, which is known as test-time prompt tuning (TPT). Existing TPT methods typically rely on data augmentation and confidence selection. However, conventional data augmentation techniques, e.g., random resized crops, suffers from the lack of data diversity, while entropy-based confidence selection alone is not sufficient to guarantee prediction fidelity. To address these issues, we propose a novel TPT method, named DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data. Specifically, we incorporate augmented data by both conventional method and pre-trained stable diffusion to exploit their respective merits, improving the models ability to adapt to unknown new test data. Moreover, to ensure the prediction fidelity of generated data, we introduce a cosine similarity-based filtration technique to select the generated data with higher similarity to the single test sample. Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13\% compared to the state-of-the-art TPT method. Our code and models will be publicly released.) <|cite_end|> employs an external diffusion model for diverse data augmentation but is impractical due to the complexity of its dependence on an external diffusion model. In contrast, our approach efficiently updates model parameters in a single step, focusing on adapting the visual encoder of CLIP with out-of-distribution samples at test time, without relying on pre-trained weights or external sources.
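For context, the generic test-time entropy-minimization recipe that the above methods build upon (sketched below in PyTorch; this is a simplified, method-agnostic illustration rather than the exact procedure of any cited work or of ours) updates a small set of chosen parameters so that the prediction on the test input becomes more confident:
\begin{verbatim}
import torch

def entropy_minimization_step(model, x, adaptable_params, lr: float = 1e-3):
    """One generic test-time update on input x.
    adaptable_params: the small parameter subset chosen to be tuned
    (e.g. normalization affines, prompt vectors, or low-rank adapters)."""
    optimizer = torch.optim.SGD(adaptable_params, lr=lr)
    probs = model(x).softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return entropy.item()
\end{verbatim}
The methods discussed above differ mainly in which parameters are adapted and in how the confidence objective is regularized.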
\vspace{0.10cm}
\noindent\textbf{Fine-tuning for Large Vision-Language Models}:
Having been pre-trained in a self-supervised manner on vast image-text pairs, VLMs like CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|> and ALIGN <|cite_start|> (Reference: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision: Pre-trained representations are becoming crucial for many NLP and perception tasks. While representation learning in NLP has transitioned to training on raw text without human annotations, visual and vision-language representations still rely heavily on curated training datasets that are expensive or require expert knowledge. For vision applications, representations are mostly learned using datasets with explicit class labels such as ImageNet or OpenImages. For vision-language, popular datasets like Conceptual Captions, MSCOCO, or CLIP all involve a non-trivial data collection (and cleaning) process. This costly curation process limits the size of datasets and hence hinders the scaling of trained models. In this paper, we leverage a noisy dataset of over one billion image alt-text pairs, obtained without expensive filtering or post-processing steps in the Conceptual Captions dataset. A simple dual-encoder architecture learns to align visual and language representations of the image and text pairs using a contrastive loss. We show that the scale of our corpus can make up for its noise and leads to state-of-the-art representations even with such a simple learning scheme. Our visual representation achieves strong performance when transferred to classification tasks such as ImageNet and VTAB. The aligned visual and language representations enables zero-shot image classification and also set new state-of-the-art results on Flickr30K and MSCOCO image-text retrieval benchmarks, even when compared with more sophisticated cross-attention models. The representations also enable cross-modality search with complex text and text + image queries.) <|cite_end|> have demonstrated good generalizability. 
However, efficiently adapting them to downstream tasks with limited data remains challenging. CoOp <|cite_start|> (Reference: Learning to Prompt for Vision-Language Models: Large pre-trained vision-language models like CLIP have shown great potential in learning representations that are transferable across a wide range of downstream tasks. Different from the traditional representation learning that is based mostly on discretized labels, vision-language pre-training aligns images and texts in a common feature space, which allows zero-shot transfer to a downstream task via prompting, i.e., classification weights are synthesized from natural language describing classes of interest. In this work, we show that a major challenge for deploying such models in practice is prompt engineering, which requires domain expertise and is extremely time-consuming -- one needs to spend a significant amount of time on words tuning since a slight change in wording could have a huge impact on performance. Inspired by recent advances in prompt learning research in natural language processing (NLP), we propose Context Optimization (CoOp), a simple approach specifically for adapting CLIP-like vision-language models for downstream image recognition. Concretely, CoOp models a prompt's context words with learnable vectors while the entire pre-trained parameters are kept fixed. To handle different image recognition tasks, we provide two implementations of CoOp: unified context and class-specific context. Through extensive experiments on 11 datasets, we demonstrate that CoOp requires as few as one or two shots to beat hand-crafted prompts with a decent margin and is able to gain significant improvements over prompt engineering with more shots, e.g., with 16 shots the average gain is around 15% (with the highest reaching over 45%). Despite being a learning-based approach, CoOp achieves superb domain generalization performance compared with the zero-shot model using hand-crafted prompts.) <|cite_end|> proposes to fine-tune CLIP by learning a set of prompts in the text encoder. CoCoOp <|cite_start|> (Reference: Conditional Prompt Learning for Vision-Language Models: With the rise of powerful pre-trained vision-language models like CLIP, it becomes essential to investigate ways to adapt these models to downstream datasets. A recently proposed method named Context Optimization (CoOp) introduces the concept of prompt learning -- a recent trend in NLP -- to the vision domain for adapting pre-trained vision-language models. Specifically, CoOp turns context words in a prompt into a set of learnable vectors and, with only a few labeled images for learning, can achieve huge improvements over intensively-tuned manual prompts. In our study we identify a critical problem of CoOp: the learned context is not generalizable to wider unseen classes within the same dataset, suggesting that CoOp overfits base classes observed during training. To address the problem, we propose Conditional Context Optimization (CoCoOp), which extends CoOp by further learning a lightweight neural network to generate for each image an input-conditional token (vector). Compared to CoOp's static prompts, our dynamic prompts adapt to each instance and are thus less sensitive to class shift. Extensive experiments show that CoCoOp generalizes much better than CoOp to unseen classes, even showing promising transferability beyond a single dataset; and yields stronger domain generalization performance as well. 
Code is available at https://github.com/KaiyangZhou/CoOp.) <|cite_end|> highlights the inferior generalization capability of CoOp and conditions the text prompt tokens on image embeddings on the fly. MaPLe <|cite_start|> (Reference: MaPLe: Multi-modal Prompt Learning: Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.) <|cite_end|> jointly learns deep prompts at both vision and text encoders. Despite these advancements, existing methods often rely on pre-trained weights, posing challenges in real-world scenarios where no such training data from the target domain is available. In contrast, our work utilizes LoRA <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. 
We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|>, initialized from scratch, to adapt the attention layers of the visual encoder at test time and thereby address distribution shifts in out-of-domain recognition tasks.
\vspace{0.10cm}
\noindent\textbf{Entropy Minimization}:
The primary challenge of TTA is the limited access to samples from the test dataset during online updates, which causes error accumulation. To mitigate this issue, TTA methods have utilized the entropy of model predictions as a confidence metric. TPT <|cite_start|> (Reference: Test-Time Prompt Tuning for Zero-Shot Generalization in Vision-Language Models: Pre-trained vision-language models (e.g., CLIP) have shown promising zero-shot generalization in many downstream tasks with properly designed text prompts. Instead of relying on hand-engineered prompts, recent works learn prompts using the training data from downstream tasks. While effective, training on domain-specific data reduces a model's generalization capability to unseen new domains. In this work, we propose test-time prompt tuning (TPT), a method that can learn adaptive prompts on the fly with a single test sample. For image classification, TPT optimizes the prompt by minimizing the entropy with confidence selection so that the model has consistent predictions across different augmented views of each test sample. In evaluating generalization to natural distribution shifts, TPT improves the zero-shot top-1 accuracy of CLIP by 3.6% on average, surpassing previous prompt tuning approaches that require additional task-specific training data. In evaluating cross-dataset generalization with unseen categories, TPT performs on par with the state-of-the-art approaches that use additional training data. Project page: https://azshue.github.io/TPT.) <|cite_end|> attempts to select the augmented samples that have minimum entropy. The need for a batch of test samples is eliminated by generating multiple augmented views of a single test point in <|cite_start|> (Reference: MEMO: Test Time Robustness via Adaptation and Augmentation: While deep neural networks can attain good accuracy on in-distribution test points, many applications require robustness even in the face of unexpected perturbations in the input, changes in the domain, or other sources of distribution shift. We study the problem of test time robustification, i.e., using the test input to improve model robustness. Recent prior works have proposed methods for test time adaptation, however, they each introduce additional assumptions, such as access to multiple test points, that prevent widespread adoption. In this work, we aim to study and devise methods that make no assumptions about the model training process and are broadly applicable at test time. We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable: when presented with a test example, perform different data augmentations on the data point, and then adapt (all of) the model parameters by minimizing the entropy of the model's average, or marginal, output distribution across the augmentations. Intuitively, this objective encourages the model to make the same prediction across different augmentations, thus enforcing the invariances encoded in these augmentations, while also maintaining confidence in its predictions. In our experiments, we evaluate two baseline ResNet models, two robust ResNet-50 models, and a robust vision transformer model, and we demonstrate that this approach achieves accuracy gains of 1-8\% over standard model evaluation and also generally outperforms prior augmentation and adaptation strategies.
For the setting in which only one test point is available, we achieve state-of-the-art results on the ImageNet-C, ImageNet-R, and, among ResNet-50 models, ImageNet-A distribution shift benchmarks.) <|cite_end|>. Motivated by TENT <|cite_start|> (Reference: Tent: Fully Test-time Adaptation by Entropy Minimization: A model must adapt itself to generalize to new and different data during testing. In this setting of fully test-time adaptation the model has only the test data and its own parameters. We propose to adapt by test entropy minimization (tent): we optimize the model for confidence as measured by the entropy of its predictions. Our method estimates normalization statistics and optimizes channel-wise affine transformations to update online on each batch. Tent reduces generalization error for image classification on corrupted ImageNet and CIFAR-10/100 and reaches a new state-of-the-art error on ImageNet-C. Tent handles source-free domain adaptation on digit recognition from SVHN to MNIST/MNIST-M/USPS, on semantic segmentation from GTA to Cityscapes, and on the VisDA-C benchmark. These results are achieved in one epoch of test-time optimization without altering training.) <|cite_end|> and EATA <|cite_start|> (Reference: Efficient Test-Time Model Adaptation without Forgetting: Test-time adaptation (TTA) seeks to tackle potential distribution shifts between training and testing data by adapting a given model w.r.t. any testing sample. This task is particularly important for deep models when the test environment changes frequently. Although some recent attempts have been made to handle this task, we still face two practical challenges: 1) existing methods have to perform backward computation for each test sample, resulting in unbearable prediction cost to many applications; 2) while existing TTA solutions can significantly improve the test performance on out-of-distribution data, they often suffer from severe performance degradation on in-distribution data after TTA (known as catastrophic forgetting). In this paper, we point out that not all the test samples contribute equally to model adaptation, and high-entropy ones may lead to noisy gradients that could disrupt the model. Motivated by this, we propose an active sample selection criterion to identify reliable and non-redundant samples, on which the model is updated to minimize the entropy loss for test-time adaptation. Furthermore, to alleviate the forgetting issue, we introduce a Fisher regularizer to constrain important model parameters from drastic changes, where the Fisher importance is estimated from test samples with generated pseudo labels. Extensive experiments on CIFAR-10-C, ImageNet-C, and ImageNet-R verify the effectiveness of our proposed method.) <|cite_end|>, recently <|cite_start|> (Reference: Entropy is not Enough for Test-Time Adaptation: From the Perspective of Disentangled Factors: Test-time adaptation (TTA) fine-tunes pre-trained deep neural networks for unseen test data. The primary challenge of TTA is limited access to the entire test dataset during online updates, causing error accumulation. To mitigate it, TTA methods have utilized the model output's entropy as a confidence metric that aims to determine which samples have a lower likelihood of causing error. 
Through experimental studies, however, we observed the unreliability of entropy as a confidence metric for TTA under biased scenarios and theoretically revealed that it stems from the neglect of the influence of latent disentangled factors of data on predictions. Building upon these findings, we introduce a novel TTA method named Destroy Your Object (DeYO), which leverages a newly proposed confidence metric named Pseudo-Label Probability Difference (PLPD). PLPD quantifies the influence of the shape of an object on prediction by measuring the difference between predictions before and after applying an object-destructive transformation. DeYO consists of sample selection and sample weighting, which employ entropy and PLPD concurrently. For robust adaptation, DeYO prioritizes samples that dominantly incorporate shape information when making predictions. Our extensive experiments demonstrate the consistent superiority of DeYO over baseline methods across various scenarios, including biased and wild. Project page is publicly available at https://whitesnowdrop.github.io/DeYO/.) <|cite_end|> shows that entropy alone as a measure of confidence is insufficient for TTA, and proposes DeYO, which leverages a confidence metric called PLPD together with entropy. While effective for natural datasets, cross-dataset performance remains an unresolved problem, which we attempt to address using the weighted entropy loss. <|paper_end|> | [
"<|reference_start|> Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning: Benefiting from prompt tuning, recent years have witnessed the promising performance of pre-trained vision-language models, e.g., CLIP, on versatile downstream tasks. In this paper, we focus on a particular setting of learning adaptive prompts on the fly for each test sample from an unseen new domain, which is known as test-time prompt tuning (TPT). Existing TPT methods typically rely on data augmentation and confidence selection. However, conventional data augmentation techniques, e.g., random resized crops, suffers from the lack of data diversity, while entropy-based confidence selection alone is not sufficient to guarantee prediction fidelity. To address these issues, we propose a novel TPT method, named DiffTPT, which leverages pre-trained diffusion models to generate diverse and informative new data. Specifically, we incorporate augmented data by both conventional method and pre-trained stable diffusion to exploit their respective merits, improving the models ability to adapt to unknown new test data. Moreover, to ensure the prediction fidelity of generated data, we introduce a cosine similarity-based filtration technique to select the generated data with higher similarity to the single test sample. Our experiments on test datasets with distribution shifts and unseen categories demonstrate that DiffTPT improves the zero-shot accuracy by an average of 5.13\\% compared to the state-of-the-art TPT method. Our code and models will be publicly released. <|reference_end|>",
"<|reference_start|> LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA. <|reference_end|>",
"<|reference_start|> In Advances in Neural Information Processing Systems: <|reference_end|>",
"<|reference_start|> CALIP: Zero-Shot Enhancement of CLIP with Parameter-free Attention: Contrastive Language-Image Pre-training (CLIP) has been shown to learn visual representations with great transferability, which achieves promising accuracy for zero-shot classification. To further improve its downstream performance, existing works propose additional learnable modules upon CLIP and fine-tune them by few-shot training sets. However, the resulting extra training cost and data requirement severely hinder the efficiency for model deployment and knowledge transfer. In this paper, we introduce a free-lunch enhancement method, CALIP, to boost CLIP's zero-shot performance via a parameter-free Attention module. Specifically, we guide visual and textual representations to interact with each other and explore cross-modal informative features via attention. As the pre-training has largely reduced the embedding distances between two modalities, we discard all learnable parameters in the attention and bidirectionally update the multi-modal features, enabling the whole process to be parameter-free and training-free. In this way, the images are blended with textual-aware signals and the text representations become visual-guided for better adaptive zero-shot alignment. We evaluate CALIP on various benchmarks of 14 datasets for both 2D image and 3D point cloud few-shot classification, showing consistent zero-shot performance improvement over CLIP. Based on that, we further insert a small number of linear layers in CALIP's attention module and verify our robustness under the few-shot settings, which also achieves leading performance compared to existing methods. Those extensive experiments demonstrate the superiority of our approach for efficient enhancement of CLIP. <|reference_end|>"
] | [
13,
15,
25,
36
] | {"<|cite_1|>": "arxiv-414701", "<|multi_cite_2_1|>": "arxiv-400463", "<|multi_cite_2_2|>": "arxiv-364803", "<|multi_cite_3_1|>": "arxiv-364496", "<|multi_cite_3_2|>": "arxiv-404771", "<|multi_cite_3_3|>": "arxiv-573233", "<|cite_4|>": "arxiv-407638", "<|multi_cite_5_1|>": "arxiv-451673", "<|multi_cite_5_2|>": "arxiv-523062", "<|cite_6|>": "arxiv-323919", "<|cite_7|>": "arxiv-446543", "<|cite_8|>": "arxiv-555226", "<|multi_cite_9_1|>": "arxiv-446543", "<|multi_cite_9_2|>": "arxiv-530441", "<|multi_cite_9_3|>": "arxiv-555226", "<|cite_10|>": "arxiv-349236", "<|multi_cite_11_1|>": "ss-832115", "<|multi_cite_11_2|>": "arxiv-527916", "<|multi_cite_11_3|>": "arxiv-548967", "<|multi_cite_11_4|>": "ss-680625", "<|multi_cite_11_5|>": "arxiv-493412", "<|multi_cite_11_6|>": "arxiv-525276", "<|multi_cite_11_7|>": "ss-832115", "<|multi_cite_11_8|>": "arxiv-522093", "<|multi_cite_11_9|>": "arxiv-561807", "<|multi_cite_11_10|>": "ss-832115", "<|cite_12|>": "arxiv-399534", "<|multi_cite_13_1|>": "arxiv-446543", "<|multi_cite_13_2|>": "arxiv-555226", "<|multi_cite_14_1|>": "arxiv-165786", "<|multi_cite_14_2|>": "arxiv-259752", "<|multi_cite_14_3|>": "arxiv-348083", "<|multi_cite_15_1|>": "arxiv-226190", "<|multi_cite_15_2|>": "arxiv-272955", "<|multi_cite_15_3|>": "ss-832115", "<|cite_16|>": "arxiv-446543", "<|cite_17|>": "arxiv-449547", "<|cite_18|>": "arxiv-555226", "<|cite_19|>": "arxiv-530441", "<|cite_20|>": "arxiv-323919", "<|cite_21|>": "arxiv-320496", "<|cite_22|>": "arxiv-364496", "<|cite_23|>": "arxiv-404771", "<|cite_24|>": "arxiv-451673", "<|cite_25|>": "arxiv-349236", "<|cite_26|>": "arxiv-446543", "<|cite_27|>": "arxiv-375150", "<|cite_28|>": "arxiv-272955", "<|cite_29|>": "arxiv-411220", "<|cite_30|>": "arxiv-594549"} |
2111.07774-0 | <|paper_start|> Title: D^2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos
Abstract: D^2Conv3D: Dynamic Dilated Convolutions for Object Segmentation in Videos: Despite receiving significant attention from the research community, the task of segmenting and tracking objects in monocular videos still has much room for improvement. Existing works have simultaneously justified the efficacy of dilated and deformable convolutions for various image-level segmentation tasks. This gives reason to believe that 3D extensions of such convolutions should also yield performance improvements for video-level segmentation tasks. However, this aspect has not yet been explored thoroughly in existing literature. In this paper, we propose Dynamic Dilated Convolutions (D^2Conv3D): a novel type of convolution which draws inspiration from dilated and deformable convolutions and extends them to the 3D (spatio-temporal) domain. We experimentally show that D^2Conv3D can be used to improve the performance of multiple 3D CNN architectures across multiple video segmentation related benchmarks by simply employing D^2Conv3D as a drop-in replacement for standard convolutions. We further show that D^2Conv3D out-performs trivial extensions of existing dilated and deformable convolutions to 3D. Lastly, we set a new state-of-the-art on the DAVIS 2016 Unsupervised Video Object Segmentation benchmark. Code is made publicly available at https://github.com/Schmiddo/d2conv3d .
Introduction
The task of segmenting objects from monocular video sequences has received significant attention from the research community in recent years, mainly because of useful applications in self-driven cars, autonomous robots, \etc. Several existing approaches <|cite_start|> (Reference: PReMVOS: Proposal-generation, Refinement and Merging for Video Object Segmentation: We address semi-supervised video object segmentation, the task of automatically generating accurate and consistent pixel masks for objects in a video sequence, given the first-frame ground truth annotations. Towards this goal, we present the PReMVOS algorithm (Proposal-generation, Refinement and Merging for Video Object Segmentation). Our method separates this problem into two steps, first generating a set of accurate object segmentation mask proposals for each video frame and then selecting and merging these proposals into accurate and temporally consistent pixel-wise object tracks over a video sequence in a way which is designed to specifically tackle the difficult challenges involved with segmenting multiple objects across a video sequence. Our approach surpasses all previous state-of-the-art results on the DAVIS 2017 video object segmentation benchmark with a J & F mean score of 71.6 on the test-dev dataset, and achieves first place in both the DAVIS 2018 Video Object Segmentation Challenge and the YouTube-VOS 1st Large-scale Video Object Segmentation Challenge.) <|cite_end|> <|cite_start|> (Reference: {UnOVOST: Unsupervised Offline Video Object Segmentation and Tracking for the 2019 Unsupervised DAVIS Challenge: We address Unsupervised Video Object Segmentation (UVOS), the task of automatically generating accurate pixel masks for salient objects in a video sequence and of tracking these objects consistently through time, without any information about which objects should be tracked. Towards solving this task, we present UnOVOST (Unsupervised Offline Video Object Segmentation and Tracking) as a simple and generic algorithm which is able to track a large variety of objects. This algorithm hierarchically builds up tracks in five stages. First, object proposal masks are generated using Mask R-CNN. Second, masks are sub-selected and clipped so that they do not overlap in the image domain. Third, tracklets are generated by grouping object proposals that are strongly temporally consistent with each other under optical flow warping. Fourth, tracklets are merged into long-term consistent object tracks using their temporal consistency and an appearance similarity metric calculated using an object re-identification network. Finally, the most salient object tracks are selected based on temporal track length and detection confidence scores. We evaluate our approach on the DAVIS 2017 Unsupervised dataset and obtain state-of-the-art performance with a mean J&F score of 58% on the test-dev benchmark. Our approach further achieves first place in the DAVIS 2019 Unsupervised Video Object Segmentation Challenge with a mean of J&F score of 56.4% on the test-challenge benchmark.) <|cite_end|> <|cite_start|> (Reference: Classifying, Segmenting, and Tracking Object Instances in Video with Mask Propagation: We introduce a method for simultaneously classifying, segmenting and tracking object instances in a video sequence. Our method, named MaskProp, adapts the popular Mask R-CNN to video by adding a mask propagation branch that propagates frame-level object instance masks from each video frame to all the other frames in a video clip. 
This allows our system to predict clip-level instance tracks with respect to the object instances segmented in the middle frame of the clip. Clip-level instance tracks generated densely for each frame in the sequence are finally aggregated to produce video-level object instance segmentation and classification. Our experiments demonstrate that our clip-level instance segmentation makes our approach robust to motion blur and object occlusions in video. MaskProp achieves the best reported accuracy on the YouTube-VIS dataset, outperforming the ICCV 2019 video instance segmentation challenge winner despite being much simpler and using orders of magnitude less labeled data (1.3M vs 1B images and 860K vs 14M bounding boxes).) <|cite_end|>for this task follow a two-step paradigm where objects are first segmented in individual image frames, followed by a second temporal association step.
Such methods leveraged the availability of accurate image-level instance segmentation networks <|cite_start|> (Reference: Learning to Refine Object Segments: Object segmentation requires both object-level information and low-level pixel data. This presents a challenge for feedforward networks: lower layers in convolutional nets capture rich spatial information, while upper layers encode object-level knowledge but are invariant to factors such as pose and appearance. In this work we propose to augment feedforward nets for object segmentation with a novel top-down refinement approach. The resulting bottom-up/top-down architecture is capable of efficiently generating high-fidelity object masks. Similarly to skip connections, our approach leverages features at all layers of the net. Unlike skip connections, our approach does not attempt to output independent predictions at each layer. Instead, we first output a coarse `mask encoding' in a feedforward pass, then refine this mask encoding in a top-down pass utilizing features at successively lower layers. The approach is simple, fast, and effective. Building on the recent DeepMask network for generating object proposals, we show accuracy improvements of 10-20% in average recall for various setups. Additionally, by optimizing the overall network architecture, our approach, which we call SharpMask, is 50% faster than the original DeepMask network (under .8s per image).) <|cite_end|> <|cite_start|> (Reference: Mask R-CNN: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron) <|cite_end|>and various cues for temporal association (\eg attention, optical flow, Re-ID) <|cite_start|> (Reference: PReMVOS: Proposal-generation, Refinement and Merging for Video Object Segmentation: We address semi-supervised video object segmentation, the task of automatically generating accurate and consistent pixel masks for objects in a video sequence, given the first-frame ground truth annotations. Towards this goal, we present the PReMVOS algorithm (Proposal-generation, Refinement and Merging for Video Object Segmentation). 
Our method separates this problem into two steps, first generating a set of accurate object segmentation mask proposals for each video frame and then selecting and merging these proposals into accurate and temporally consistent pixel-wise object tracks over a video sequence in a way which is designed to specifically tackle the difficult challenges involved with segmenting multiple objects across a video sequence. Our approach surpasses all previous state-of-the-art results on the DAVIS 2017 video object segmentation benchmark with a J & F mean score of 71.6 on the test-dev dataset, and achieves first place in both the DAVIS 2018 Video Object Segmentation Challenge and the YouTube-VOS 1st Large-scale Video Object Segmentation Challenge.) <|cite_end|> <|cite_start|> (Reference: {UnOVOST: Unsupervised Offline Video Object Segmentation and Tracking for the 2019 Unsupervised DAVIS Challenge: We address Unsupervised Video Object Segmentation (UVOS), the task of automatically generating accurate pixel masks for salient objects in a video sequence and of tracking these objects consistently through time, without any information about which objects should be tracked. Towards solving this task, we present UnOVOST (Unsupervised Offline Video Object Segmentation and Tracking) as a simple and generic algorithm which is able to track a large variety of objects. This algorithm hierarchically builds up tracks in five stages. First, object proposal masks are generated using Mask R-CNN. Second, masks are sub-selected and clipped so that they do not overlap in the image domain. Third, tracklets are generated by grouping object proposals that are strongly temporally consistent with each other under optical flow warping. Fourth, tracklets are merged into long-term consistent object tracks using their temporal consistency and an appearance similarity metric calculated using an object re-identification network. Finally, the most salient object tracks are selected based on temporal track length and detection confidence scores. We evaluate our approach on the DAVIS 2017 Unsupervised dataset and obtain state-of-the-art performance with a mean J&F score of 58% on the test-dev benchmark. Our approach further achieves first place in the DAVIS 2019 Unsupervised Video Object Segmentation Challenge with a mean of J&F score of 56.4% on the test-challenge benchmark.) <|cite_end|> <|cite_start|> (Reference: Anchor Diffusion for Unsupervised Video Object Segmentation: Unsupervised video object segmentation has often been tackled by methods based on recurrent neural networks and optical flow. Despite their complexity, these kinds of approaches tend to favour short-term temporal dependencies and are thus prone to accumulating inaccuracies, which cause drift over time. Moreover, simple (static) image segmentation models, alone, can perform competitively against these methods, which further suggests that the way temporal dependencies are modelled should be reconsidered. Motivated by these observations, in this paper we explore simple yet effective strategies to model long-term temporal dependencies. Inspired by the non-local operators of [70], we introduce a technique to establish dense correspondences between pixel embeddings of a reference "anchor" frame and the current one. This allows the learning of pairwise dependencies at arbitrarily long distances without conditioning on intermediate frames. 
Without online supervision, our approach can suppress the background and precisely segment the foreground object even in challenging scenarios, while maintaining consistent performance over time. With a mean IoU of $81.7\%$, our method ranks first on the DAVIS-2016 leaderboard of unsupervised methods, while still being competitive against state-of-the-art online semi-supervised approaches. We further evaluate our method on the FBMS dataset and the ViSal video saliency dataset, showing results competitive with the state of the art.) <|cite_end|> <|cite_start|> (Reference: {Reciprocal Transformations for Unsupervised Video Object Segmentation: Unsupervised video object segmentation (UVOS) aims at segmenting the primary objects in videos without any human intervention. Due to the lack of prior knowledge about the primary objects, identifying them from videos is the major challenge of UVOS. Previous methods often regard the moving objects as primary ones and rely on optical flow to capture the motion cues in videos, but the flow information alone is insufficient to distinguish the primary objects from the background objects that move together. This is because, when the noisy motion features are combined with the appearance features, the localization of the primary objects is misguided. To address this problem, we propose a novel reciprocal transformation network to discover primary objects by correlating three key factors: the intra-frame contrast, the motion cues, and temporal coherence of recurring objects. Each corresponds to a representative type of primary object, and our reciprocal mechanism enables an organic coordination of them to effectively remove ambiguous distractions from videos. Additionally, to exclude the information of the moving background objects from motion features, our transformation module enables to reciprocally transform the appearance features to enhance the motion features, so as to focus on the moving objects with salient appearance while removing the co-moving outliers. Experiments on the public benchmarks demonstrate that our model significantly outperforms the state-of-the-art methods. Code is available at https://github.com/OliverRensu/RTNet.) <|cite_end|>.
More recently however, methods have emerged <|cite_start|> (Reference: E3D: An efficient 3D CNN for the recognition of dairy cow's basic motion behavior: ) <|cite_end|> <|cite_start|> (Reference: STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos: Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. Code and models are available at https://github.com/sabarim/STEm-Seg.) <|cite_end|> <|cite_start|> (Reference: Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg.) <|cite_end|>which use 3D convolutions to jointly reason over spatial and temporal dimensions, resulting in improved performance for various video object segmentation related tasks.
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{figures/conv_comparison_v9.png}
\caption{
Comparison of regular convolutions (left), modulated deformable convolutions <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>(middle), and \OurConvName (right).
Note that \OurConvName predicts a distinct spatiotemporal dilation for every point in the volume.
Different colors indicate different modulation values.
}
\label{fig:conv_comparison}
\end{figure}
In parallel with the aforementioned developments in the video domain, another line of research in computer vision has focused on improving the performance of image-level segmentation networks.
To this end, one line of work regards the limited receptive field of convolution operations as a drawback and aims to mitigate it.
Although a restricted receptive field supports weight sharing and imparts translation invariance, it is a limitation for dense segmentation tasks, where a wider view of the feature map can be beneficial.
Chen~\etal published a series of works <|cite_start|> (Reference: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.) <|cite_end|> <|cite_start|> (Reference: Rethinking Atrous Convolution for Semantic Image Segmentation: In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed `DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.) <|cite_end|> <|cite_start|> (Reference: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation: Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. 
The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0\% and 82.1\% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at \url{https://github.com/tensorflow/models/tree/master/research/deeplab}.) <|cite_end|>that use \emph{atrous convolutions} (also called \emph{dilated convolutions}) for semantic segmentation in images. Dilated convolutions effectively add padded zeros between the values in the convolutional kernel, thus enlarging the receptive field without incurring computational overhead or increasing the parameter count. Chen~\etal argued that the high degree of spatial downsampling usually applied in CNNs is detrimental for dense segmentation tasks. They instead maintained feature maps at a higher resolution and used dilated convolutions to capture a larger receptive field.
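For reference, and with notation chosen here purely for illustration, a 2D convolution with dilation rate $r$ over the regular kernel grid $\mathcal{R}$ (\eg $\mathcal{R} = \{-1, 0, 1\}^{2}$ for a $3 \times 3$ kernel) computes
\[
\mathbf{y}(\mathbf{p}_{0}) = \sum_{\mathbf{p}_{k} \in \mathcal{R}} \mathbf{w}(\mathbf{p}_{k}) \cdot \mathbf{x}\big(\mathbf{p}_{0} + r \cdot \mathbf{p}_{k}\big),
\]
i.e., the kernel is applied to input locations spaced $r$ pixels apart (equivalent to inserting $r-1$ zeros between adjacent kernel weights); $r = 1$ recovers the standard convolution, while larger rates enlarge the receptive field without adding parameters.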
Another method for enhancing the receptive field of convolutions is the idea of \textit{deformable convolutions} <|cite_start|> (Reference: Deformable Convolutional Networks: Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.) <|cite_end|>(DCNv1).
Here, the convolutional kernel can be arbitrarily shaped depending on the input feature map, as opposed to being a regular grid as in standard or dilated convolutions. Practically, this is realized by using the input feature map to predict offsets (or \emph{deformations}) to the sampling locations of the convolution operation. The underlying idea here is to enable the network to dynamically adapt the kernel based on the input image.
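In the same notation, a deformable convolution augments every fixed sampling location with a learned, input-dependent offset $\Delta\mathbf{p}_{k}$,
\[
\mathbf{y}(\mathbf{p}_{0}) = \sum_{\mathbf{p}_{k} \in \mathcal{R}} \mathbf{w}(\mathbf{p}_{k}) \cdot \mathbf{x}\big(\mathbf{p}_{0} + \mathbf{p}_{k} + \Delta\mathbf{p}_{k}\big),
\]
where the offsets are predicted by an auxiliary convolution over the same input feature map and $\mathbf{x}(\cdot)$ is evaluated at the resulting fractional positions via bilinear interpolation.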
Zhu~\etal <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>further extended this by adding a dynamic \textit{modulation parameter} which scales the kernel weight value for each sampling location (DCNv2).
By simply using deformable convolutions as a drop-in replacement for standard convolutions, it was shown that the performance of a variety of network architectures could be improved for object detection and segmentation.
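Continuing the notation from above, the modulated variant additionally predicts a scalar $\Delta m_{k} \in [0, 1]$ per sampling location that weights its contribution,
\[
\mathbf{y}(\mathbf{p}_{0}) = \sum_{\mathbf{p}_{k} \in \mathcal{R}} \Delta m_{k} \cdot \mathbf{w}(\mathbf{p}_{k}) \cdot \mathbf{x}\big(\mathbf{p}_{0} + \mathbf{p}_{k} + \Delta\mathbf{p}_{k}\big),
\]
so that setting $\Delta m_{k} = 1$ and $\Delta\mathbf{p}_{k} = \mathbf{0}$ recovers the regular convolution.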
These developments give rise to a question: can 3D dilated/deformable convolutions repeat the success of their 2D counterparts and deliver improvements for video segmentation tasks? In this paper, we show that the answer is 'yes'. To this end, we propose a novel type of convolution called \OurConvName (\textbf{D}ynamic \textbf{D}ilated \textbf{3D} \textbf{Conv}olutions), which combines elements from dilated and deformable convolutions by dynamically learning a multiplicative scaling factor for the sampling locations of a convolutional kernel.
Additionally, we predict a modulation parameter which dynamically scales the kernel weights based on the input features.
We show that \OurConvName out-performs trivial extensions of dilated and deformable convolutions to 3D. Fig.~\ref{fig:conv_comparison} provides an illustrative comparison between: (i) standard 3D convolutions, (ii) a 3D extension of the modulated deformable convolutions proposed by Zhu~\etal <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>, (iii) our proposed \OurConvName.
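To make the operator more concrete, the following minimal PyTorch sketch shows one possible realization of a 3D convolution whose spatiotemporal dilation and modulation are predicted per voxel from the input features. It is an illustrative interpretation of the description above rather than a reference implementation; the module and parameter names (\eg \texttt{dilation\_pred}, \texttt{modulation\_pred}) are placeholders, and design choices such as applying the modulation once per output voxel are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicDilated3dConv(nn.Module):
    # Illustrative sketch only: a 3x3x3 convolution whose per-voxel
    # spatio-temporal dilation and modulation are predicted from the input.
    def __init__(self, in_ch, out_ch, k=3):
        super().__init__()
        self.k = k
        self.weight = nn.Parameter(torch.randn(out_ch, in_ch, k, k, k) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))
        # one dilation factor per (t, h, w) axis and one modulation scalar, per voxel
        self.dilation_pred = nn.Conv3d(in_ch, 3, kernel_size=3, padding=1)
        self.modulation_pred = nn.Conv3d(in_ch, 1, kernel_size=3, padding=1)

    def forward(self, x):
        n, c, t, h, w = x.shape
        dil = F.softplus(self.dilation_pred(x))       # (n, 3, t, h, w), positive
        mod = torch.sigmoid(self.modulation_pred(x))  # (n, 1, t, h, w), in (0, 1)

        # base sampling grid in normalized [-1, 1] coords (grid_sample order: x, y, z)
        zs = torch.linspace(-1, 1, t, device=x.device)
        ys = torch.linspace(-1, 1, h, device=x.device)
        xs = torch.linspace(-1, 1, w, device=x.device)
        gz, gy, gx = torch.meshgrid(zs, ys, xs, indexing='ij')
        base = torch.stack((gx, gy, gz), dim=-1).unsqueeze(0).expand(n, -1, -1, -1, -1)
        # size of one voxel step in normalized coordinates, per (x, y, z) axis
        step = torch.tensor([2.0 / max(w - 1, 1), 2.0 / max(h - 1, 1),
                             2.0 / max(t - 1, 1)], device=x.device)

        r = self.k // 2
        out = x.new_zeros(n, self.weight.shape[0], t, h, w)
        # loop over kernel offsets for clarity; an efficient version would batch this
        for kt in range(-r, r + 1):
            for kh in range(-r, r + 1):
                for kw in range(-r, r + 1):
                    # scale the fixed kernel offset by the predicted per-voxel dilation
                    off = torch.stack((dil[:, 2] * kw, dil[:, 1] * kh,
                                       dil[:, 0] * kt), dim=-1)
                    grid = base + off * step
                    sampled = F.grid_sample(x, grid, mode='bilinear',
                                            padding_mode='zeros', align_corners=True)
                    w_k = self.weight[:, :, kt + r, kh + r, kw + r]  # (out_ch, in_ch)
                    out = out + torch.einsum('oc,ncthw->nothw', w_k, sampled)

        return out * mod + self.bias.view(1, -1, 1, 1, 1)
\end{verbatim}
The nested loop over the $3^{3}$ kernel offsets is written for readability; a practical implementation would fuse the sampling steps or rely on a dedicated CUDA kernel.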
In summary, our contributions are as follows:
\begin{itemize}
\item We propose a novel \OurConvName operator which can be used as a drop-in replacement for standard convolutions in 3D CNNs to improve their performance on video segmentation tasks.
\item We experimentally justify the efficacy of \OurConvName by applying it to two different 3D CNN based architectures <|cite_start|> (Reference: STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos: Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. Code and models are available at https://github.com/sabarim/STEm-Seg.) <|cite_end|> <|cite_start|> (Reference: Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg.) <|cite_end|>and evaluating them on five different benchmarks <|cite_start|> (Reference: A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation: Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. 
At the same time, legacy datasets may impend the evolution of a field due to saturated algorithm performance and the lack of contemporary, high quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motionblur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future works.) <|cite_end|> <|cite_start|> (Reference: The 2019 DAVIS Challenge on VOS: Unsupervised Multi-Object Segmentation: We present the 2019 DAVIS Challenge on Video Object Segmentation, the third edition of the DAVIS Challenge series, a public competition designed for the task of Video Object Segmentation (VOS). In addition to the original semi-supervised track and the interactive track introduced in the previous edition, a new unsupervised multi-object track will be featured this year. In the newly introduced track, participants are asked to provide non-overlapping object proposals on each image, along with an identifier linking them between frames (i.e. video object proposals), without any test-time human supervision (no scribbles or masks provided on the test video). In order to do so, we have re-annotated the train and val sets of DAVIS 2017 in a concise way that facilitates the unsupervised track, and created new test-dev and test-challenge sets for the competition. Definitions, rules, and evaluation metrics for the unsupervised track are described in detail in this paper.) <|cite_end|> <|cite_start|> (Reference: Video Instance Segmentation: In this paper we present a new computer vision task, named video instance segmentation. The goal of this new task is simultaneous detection, segmentation and tracking of instances in videos. In words, it is the first time that the image instance segmentation problem is extended to the video domain. To facilitate research on this new task, we propose a large-scale benchmark called YouTube-VIS, which consists of 2883 high-resolution YouTube videos, a 40-category label set and 131k high-quality instance masks. In addition, we propose a novel algorithm called MaskTrack R-CNN for this task. Our new method introduces a new tracking branch to Mask R-CNN to jointly perform the detection, segmentation and tracking tasks simultaneously. Finally, we evaluate the proposed method and several strong baselines on our new dataset. Experimental results clearly demonstrate the advantages of the proposed algorithm and reveal insight for future improvement. We believe the video instance segmentation task will motivate the community along the line of research for video understanding.) <|cite_end|> <|cite_start|> (Reference: MOTS: Multi-Object Tracking and Segmentation: This paper extends the popular task of multi-object tracking to multi-object tracking and segmentation (MOTS). 
Towards this goal, we create dense pixel-level annotations for two existing tracking datasets using a semi-automatic annotation procedure. Our new annotations comprise 65,213 pixel masks for 977 distinct objects (cars and pedestrians) in 10,870 video frames. For evaluation, we extend existing multi-object tracking metrics to this new task. Moreover, we propose a new baseline method which jointly addresses detection, tracking, and segmentation with a single convolutional network. We demonstrate the value of our datasets by achieving improvements in performance when training on MOTS annotations. We believe that our datasets, metrics and baseline will become a valuable resource towards developing multi-object tracking approaches that go beyond 2D bounding boxes. We make our annotations, code, and models available at https://www.vision.rwth-aachen.de/page/mots.) <|cite_end|> <|cite_start|> (Reference: Occluded Video Instance Segmentation: A Benchmark: ) <|cite_end|>.
\item We set a new state-of-the-art on the DAVIS 2016 Unsupervised challenge <|cite_start|> (Reference: A Benchmark Dataset and Evaluation Methodology for Video Object Segmentation: Over the years, datasets and benchmarks have proven their fundamental importance in computer vision research, enabling targeted progress and objective comparisons in many fields. At the same time, legacy datasets may impend the evolution of a field due to saturated algorithm performance and the lack of contemporary, high quality data. In this work we present a new benchmark dataset and evaluation methodology for the area of video object segmentation. The dataset, named DAVIS (Densely Annotated VIdeo Segmentation), consists of fifty high quality, Full HD video sequences, spanning multiple occurrences of common video object segmentation challenges such as occlusions, motionblur and appearance changes. Each video is accompanied by densely annotated, pixel-accurate and per-frame ground truth segmentation. In addition, we provide a comprehensive analysis of several state-of-the-art segmentation approaches using three complementary metrics that measure the spatial extent of the segmentation, the accuracy of the silhouette contours and the temporal coherence. The results uncover strengths and weaknesses of current approaches, opening up promising directions for future works.) <|cite_end|>by achieving a $\JnF$ score of 86.0\%.
\end{itemize}
Related Work
\PAR{Image-level Segmentation:}Dense prediction tasks such as segmentation need to predict full resolution output maps and, at the same time, use multi-scale context for effective reasoning. Existing approaches for such tasks <|cite_start|> (Reference: STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos: Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. Code and models are available at https://github.com/sabarim/STEm-Seg.) <|cite_end|> <|cite_start|> (Reference: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. 
All of our code is made publicly available online.) <|cite_end|> <|cite_start|> (Reference: Rethinking Atrous Convolution for Semantic Image Segmentation: In this work, we revisit atrous convolution, a powerful tool to explicitly adjust filter's field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed `DeepLabv3' system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.) <|cite_end|> <|cite_start|> (Reference: Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation: Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0\% and 82.1\% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at \url{https://github.com/tensorflow/models/tree/master/research/deeplab}.) <|cite_end|> <|cite_start|> (Reference: Mask R-CNN: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. 
Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron) <|cite_end|> <|cite_start|> (Reference: Multi-Scale Context Aggregation by Dilated Convolutions: State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically designed for dense prediction. The presented module uses dilated convolutions to systematically aggregate multi-scale contextual information without losing resolution. The architecture is based on the fact that dilated convolutions support exponential expansion of the receptive field without loss of resolution or coverage. We show that the presented context module increases the accuracy of state-of-the-art semantic segmentation systems. In addition, we examine the adaptation of image classification networks to dense prediction and show that simplifying the adapted network can increase accuracy.) <|cite_end|>utilize dilated convolutions for this purpose, which dilate the convolutional kernel by a fixed factor to increase the receptive field, thus mitigating the need for down-sampling the image features.
Atrous Spatial Pyramid Pooling (ASPP) <|cite_start|> (Reference: DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs: In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed "DeepLab" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7% mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online.) <|cite_end|>goes a step further by using multiple dilation rates on the same feature map to capture a multi-scale feature representation, and has been successfully used for both instance and semantic segmentation tasks <|cite_start|> (Reference: STEm-Seg: Spatio-temporal Embeddings for Instance Segmentation in Videos: Existing methods for instance segmentation in videos typically involve multi-stage pipelines that follow the tracking-by-detection paradigm and model a video clip as a sequence of images. Multiple networks are used to detect objects in individual frames, and then associate these detections over time. Hence, these methods are often non-end-to-end trainable and highly tailored to specific tasks. In this paper, we propose a different approach that is well-suited to a variety of tasks involving instance segmentation in videos. In particular, we model a video clip as a single 3D spatio-temporal volume, and propose a novel approach that segments and tracks instances across space and time in a single stage. Our problem formulation is centered around the idea of spatio-temporal embeddings which are trained to cluster pixels belonging to a specific object instance over an entire video clip. To this end, we introduce (i) novel mixing functions that enhance the feature representation of spatio-temporal embeddings, and (ii) a single-stage, proposal-free network that can reason about temporal context. Our network is trained end-to-end to learn spatio-temporal embeddings as well as parameters required to cluster these embeddings, thus simplifying inference. Our method achieves state-of-the-art results across multiple datasets and tasks. 
Code and models are available at https://github.com/sabarim/STEm-Seg.) <|cite_end|> <|cite_start|> (Reference: Mask R-CNN: We present a conceptually simple, flexible, and general framework for object instance segmentation. Our approach efficiently detects objects in an image while simultaneously generating a high-quality segmentation mask for each instance. The method, called Mask R-CNN, extends Faster R-CNN by adding a branch for predicting an object mask in parallel with the existing branch for bounding box recognition. Mask R-CNN is simple to train and adds only a small overhead to Faster R-CNN, running at 5 fps. Moreover, Mask R-CNN is easy to generalize to other tasks, e.g., allowing us to estimate human poses in the same framework. We show top results in all three tracks of the COCO suite of challenges, including instance segmentation, bounding-box object detection, and person keypoint detection. Without bells and whistles, Mask R-CNN outperforms all existing, single-model entries on every task, including the COCO 2016 challenge winners. We hope our simple and effective approach will serve as a solid baseline and help ease future research in instance-level recognition. Code has been made available at: https://github.com/facebookresearch/Detectron) <|cite_end|>.
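As a rough illustration (the dilation rates and the global-pooling branch below follow common practice and are assumptions, not taken from any specific implementation), an ASPP-style head applies several dilated convolutions in parallel to the same feature map and fuses their outputs:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class ASPPSketch(nn.Module):
    # Minimal ASPP-style module for illustration.
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, out_ch,
                      kernel_size=1 if r == 1 else 3,
                      padding=0 if r == 1 else r,
                      dilation=r)
            for r in rates])
        self.pool_branch = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.project = nn.Conv2d(out_ch * (len(rates) + 1), out_ch, kernel_size=1)

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [branch(x) for branch in self.branches]
        # image-level context: global average pooling, then upsample back
        pooled = self.pool_branch(F.adaptive_avg_pool2d(x, 1))
        feats.append(F.interpolate(pooled, size=(h, w),
                                   mode='bilinear', align_corners=False))
        return self.project(torch.cat(feats, dim=1))
\end{verbatim}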
Although dilated convolutions and ASPP help capture objects of different sizes, the convolutional kernels have fixed geometric structures since the dilation rates are constant.
Several existing works <|cite_start|> (Reference: Deformable Convolutional Networks: Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.) <|cite_end|> <|cite_start|> (Reference: Learning Depth-Guided Convolutions for Monocular 3D Object Detection: 3D object detection from a single image without LiDAR is a challenging task due to the lack of accurate depth information. Conventional 2D convolutions are unsuitable for this task because they fail to capture local object and its scale information, which are vital for 3D object detection. To better represent 3D structure, prior arts typically transform depth maps estimated from 2D images into a pseudo-LiDAR representation, and then apply existing 3D point-cloud based object detectors. However, their results depend heavily on the accuracy of the estimated depth maps, resulting in suboptimal performance. In this work, instead of using pseudo-LiDAR representation, we improve the fundamental 2D fully convolutions by proposing a new local convolutional network (LCN), termed Depth-guided Dynamic-Depthwise-Dilated LCN (D$^4$LCN), where the filters and their receptive fields can be automatically learned from image-based depth maps, making different pixels of different images have different filters. D$^4$LCN overcomes the limitation of conventional 2D convolutions and narrows the gap between image representation and 3D point cloud representation. Extensive experiments show that D$^4$LCN outperforms existing works by large margins. For example, the relative improvement of D$^4$LCN against the state-of-the-art on KITTI is 9.1\% in the moderate setting. The code is available at https://github.com/dingmyu/D4LCN.) <|cite_end|> <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. 
To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>attempt to adapt these kernels by learning offsets or other transformation parameters from the image features. Spatial Transformer Networks (STN) <|cite_start|> (Reference: Spatial Transformer Networks: Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.) <|cite_end|>learn deformations of the sampling grid, for a regular convolution operation, from the input feature map and warp the sampling grid based on the learnt deformation parameters.
Deformable Convolutional Network (DCNv1) <|cite_start|> (Reference: Deformable Convolutional Networks: Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.) <|cite_end|>on the other hand apply learned offsets to the sampling locations of a regular convolution, thereby enhancing its capability of capturing non-rigid transformations. DCNv1 can adapt to varying object sizes and scene geometry, and is shown to be effective for image-level tasks such as object detection and segmentation <|cite_start|> (Reference: Deformable Convolutional Networks: Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.) <|cite_end|>. Nevertheless, the sampling locations learned by DCNv1 often spread beyond the region of interest, which can lead to unnecessary feature influences. To overcome this issue, Zhu~\etal introduced DCNv2 <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. 
To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>where, in addition to the offsets, a dynamic modulation parameter is learned which scales the kernel weights. This parameter gives the convolution kernels additional freedom to adjust the influence of the sampled regions. A teacher network based on R-CNN <|cite_start|> (Reference: Rich feature hierarchies for accurate object detection and semantic segmentation: Object detection performance, as measured on the canonical PASCAL VOC dataset, has plateaued in the last few years. The best-performing methods are complex ensemble systems that typically combine multiple low-level image features with high-level context. In this paper, we propose a simple and scalable detection algorithm that improves mean average precision (mAP) by more than 30% relative to the previous best result on VOC 2012---achieving a mAP of 53.3%. Our approach combines two key insights: (1) one can apply high-capacity convolutional neural networks (CNNs) to bottom-up region proposals in order to localize and segment objects and (2) when labeled training data is scarce, supervised pre-training for an auxiliary task, followed by domain-specific fine-tuning, yields a significant performance boost. Since we combine region proposals with CNNs, we call our method R-CNN: Regions with CNN features. We also compare R-CNN to OverFeat, a recently proposed sliding-window detector based on a similar CNN architecture. We find that R-CNN outperforms OverFeat by a large margin on the 200-class ILSVRC2013 detection dataset. Source code for the complete system is available at http://www.cs.berkeley.edu/~rbg/rcnn.) <|cite_end|>is then used to train this modulation mechanism, where the teacher provides additional guidance to learn a more focused feature representation.
\OurConvName, similar to deformable convolutions <|cite_start|> (Reference: Deformable Convolutional Networks: Convolutional neural networks (CNNs) are inherently limited to model geometric transformations due to the fixed geometric structures in its building modules. In this work, we introduce two new modules to enhance the transformation modeling capacity of CNNs, namely, deformable convolution and deformable RoI pooling. Both are based on the idea of augmenting the spatial sampling locations in the modules with additional offsets and learning the offsets from target tasks, without additional supervision. The new modules can readily replace their plain counterparts in existing CNNs and can be easily trained end-to-end by standard back-propagation, giving rise to deformable convolutional networks. Extensive experiments validate the effectiveness of our approach on sophisticated vision tasks of object detection and semantic segmentation. The code would be released.) <|cite_end|> <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>, can be directly plugged-in to any existing architecture and improve its performance. However, unlike deformable convolutions, \OurConvName works with 3D models and can be used effectively for segmentation tasks in videos. In addition, the modulation mechanism used in \OurConvName does not need additional supervision from a teacher network as in DCNv2 <|cite_start|> (Reference: Deformable ConvNets v2: More Deformable, Better Results: The superior performance of Deformable Convolutional Networks arises from its ability to adapt to the geometric variations of objects. Through an examination of its adaptive behavior, we observe that while the spatial support for its neural features conforms more closely than regular ConvNets to object structure, this support may nevertheless extend well beyond the region of interest, causing features to be influenced by irrelevant image content. To address this problem, we present a reformulation of Deformable ConvNets that improves its ability to focus on pertinent image regions, through increased modeling power and stronger training. 
The modeling power is enhanced through a more comprehensive integration of deformable convolution within the network, and by introducing a modulation mechanism that expands the scope of deformation modeling. To effectively harness this enriched modeling capability, we guide network training via a proposed feature mimicking scheme that helps the network to learn features that reflect the object focus and classification power of R-CNN features. With the proposed contributions, this new version of Deformable ConvNets yields significant performance gains over the original model and produces leading results on the COCO benchmark for object detection and instance segmentation.) <|cite_end|>.
\PAR{Video Processing using 3D Convolutions:}Videos can be interpreted as 3D data with the third dimension being time. To leverage temporal context effectively, video classification networks <|cite_start|> (Reference: Ieee Transactions on Pattern Analysis and Machine Intelligence 1 3d Convolutional Neural Networks for Human Action Recognition: —We consider the automated recognition of human actions in surveillance videos. Most current methods build classifiers based on complex handcrafted features computed from the raw inputs. Convolutional neural networks (CNNs) are a type of deep models that can act directly on the raw inputs. However, such models are currently limited to handle 2D inputs. In this paper, we develop a novel 3D CNN model for action recognition. This model extracts features from both the spatial and the temporal dimensions by performing 3D convolutions, thereby capturing the motion information encoded in multiple adjacent frames. The developed model generates multiple channels of information from the input frames, and the final feature representation combines information from all channels. To further boost the performance of 3D CNN models, we propose to regularize the models with high-level features and combine the outputs of a variety of different models. We apply the developed models to recognize human actions in the real-world environment of airport surveillance videos, and it achieves superior performance in comparison to baseline methods.) <|cite_end|> <|cite_start|> (Reference: {Large-Scale Video Classification with Convolutional Neural Networks: Convolutional Neural Networks (CNNs) have been established as a powerful class of models for image recognition problems. Encouraged by these results, we provide an extensive empirical evaluation of CNNs on large-scale video classification using a new dataset of 1 million YouTube videos belonging to 487 classes. We study multiple approaches for extending the connectivity of a CNN in time domain to take advantage of local spatio-temporal information and suggest a multiresolution, foveated architecture as a promising way of speeding up the training. Our best spatio-temporal networks display significant performance improvements compared to strong feature-based baselines (55.3% to 63.9%), but only a surprisingly modest improvement compared to single-frame models (59.3% to 60.9%). We further study the generalization performance of our best model by retraining the top layers on the UCF-101 Action Recognition dataset and observe significant performance improvements compared to the UCF-101 baseline model (63.3% up from 43.9%).) <|cite_end|> <|cite_start|> (Reference: Learning Spatiotemporal Features with 3D Convolutional Networks: We propose a simple, yet effective approach for spatiotemporal feature learning using deep 3-dimensional convolutional networks (3D ConvNets) trained on a large scale supervised video dataset. Our findings are three-fold: 1) 3D ConvNets are more suitable for spatiotemporal feature learning compared to 2D ConvNets; 2) A homogeneous architecture with small 3x3x3 convolution kernels in all layers is among the best performing architectures for 3D ConvNets; and 3) Our learned features, namely C3D (Convolutional 3D), with a simple linear classifier outperform state-of-the-art methods on 4 different benchmarks and are comparable with current best methods on the other 2 benchmarks. 
In addition, the features are compact: achieving 52.8% accuracy on UCF101 dataset with only 10 dimensions and also very efficient to compute due to the fast inference of ConvNets. Finally, they are conceptually very simple and easy to train and use.) <|cite_end|> <|cite_start|> (Reference: Long-term Temporal Convolutions for Action Recognition: Typical human actions last several seconds and exhibit characteristic spatio-temporal structure. Recent methods attempt to capture this structure and learn action representations with convolutional neural networks. Such representations, however, are typically learned at the level of a few video frames failing to model actions at their full temporal extent. In this work we learn video representations using neural networks with long-term temporal convolutions (LTC). We demonstrate that LTC-CNN models with increased temporal extents improve the accuracy of action recognition. We also study the impact of different low-level representations, such as raw values of video pixels and optical flow vector fields and demonstrate the importance of high-quality optical flow estimation for learning accurate action models. We report state-of-the-art results on two challenging benchmarks for human action recognition UCF101 (92.7%) and HMDB51 (67.2%).) <|cite_end|>successfully use 3D-CNNs and show their superior performance. However, unlike segmentation tasks, these networks do not need large resolution feature maps, and hence the increase in computational overhead caused by 3D-CNNs is still manageable. Recent works in the field of Unsupervised Video Object Segmentation <|cite_start|> (Reference: E3D: An efficient 3D CNN for the recognition of dairy cow's basic motion behavior: ) <|cite_end|> <|cite_start|> (Reference: Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg.) <|cite_end|>, which target \textit{foreground-background} segmentation, have also adapted 3D-CNNs to improve the segmentation performance. 
Hou~\etal <|cite_start|> (Reference: E3D: An efficient 3D CNN for the recognition of dairy cow's basic motion behavior: ) <|cite_end|> use an encoder-decoder architecture based on a variant of 3D-CNNs called R2plus1D <|cite_start|> (Reference: A Closer Look at Spatiotemporal Convolutions for Action Recognition: In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block "R(2+1)D" which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51.) <|cite_end|>, and insert an ASPP module after the last layer of the encoder.
However, they adopt a relatively shallow network to compensate for the additional computation needed by ASPP, which in turn affects the final performance. Mahadevan~\etal <|cite_start|> (Reference: Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg.) <|cite_end|>on the other hand employ a much deeper channel-separated 3D-CNN <|cite_start|> (Reference: Video Classification with Channel-Separated Convolutional Networks: Group convolution has been shown to offer great computational savings in various 2D convolutional architectures for image classification. It is natural to ask: 1) if group convolution can help to alleviate the high computational cost of video classification networks; 2) what factors matter the most in 3D group convolutional networks; and 3) what are good computation/accuracy trade-offs with 3D group convolutional networks. This paper studies the effects of different design choices in 3D group convolutional networks for video classification. We empirically demonstrate that the amount of channel interactions plays an important role in the accuracy of 3D group convolutional networks. Our experiments suggest two main findings. First, it is a good practice to factorize 3D convolutions by separating channel interactions and spatiotemporal interactions as this leads to improved accuracy and lower computational cost. Second, 3D channel-separated convolutions provide a form of regularization, yielding lower training accuracy but higher test accuracy compared to 3D convolutions. These two empirical findings lead us to design an architecture -- Channel-Separated Convolutional Network (CSN) -- which is simple, efficient, yet accurate. On Sports1M, Kinetics, and Something-Something, our CSNs are comparable with or better than the state-of-the-art while being 2-3 times more efficient.) <|cite_end|>as backbone with much fewer parameters in combination with their novel 3D Global Convolutions and 3D Refinement modules in the decoder, and achieve state-of-the-art results. 
In this paper, we show that augmenting <|cite_start|> (Reference: Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg.) <|cite_end|>with \OurConvName further improves the network performance even with significantly less training data.
\PAR{Instance Segmentation in Videos:}Multi-instance Segmentation in Videos has recently emerged as a popular field due to its applicability in autonomous driving and robotics. Some of the popular tasks in this domain are Video Object Segmentation (VOS) | [
"<|reference_start|> Anchor Diffusion for Unsupervised Video Object Segmentation: Unsupervised video object segmentation has often been tackled by methods based on recurrent neural networks and optical flow. Despite their complexity, these kinds of approaches tend to favour short-term temporal dependencies and are thus prone to accumulating inaccuracies, which cause drift over time. Moreover, simple (static) image segmentation models, alone, can perform competitively against these methods, which further suggests that the way temporal dependencies are modelled should be reconsidered. Motivated by these observations, in this paper we explore simple yet effective strategies to model long-term temporal dependencies. Inspired by the non-local operators of [70], we introduce a technique to establish dense correspondences between pixel embeddings of a reference \"anchor\" frame and the current one. This allows the learning of pairwise dependencies at arbitrarily long distances without conditioning on intermediate frames. Without online supervision, our approach can suppress the background and precisely segment the foreground object even in challenging scenarios, while maintaining consistent performance over time. With a mean IoU of $81.7\\%$, our method ranks first on the DAVIS-2016 leaderboard of unsupervised methods, while still being competitive against state-of-the-art online semi-supervised approaches. We further evaluate our method on the FBMS dataset and the ViSal video saliency dataset, showing results competitive with the state of the art. <|reference_end|>",
"<|reference_start|> Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg. <|reference_end|>",
"<|reference_start|> A Closer Look at Spatiotemporal Convolutions for Action Recognition: In this paper we discuss several forms of spatiotemporal convolutions for video analysis and study their effects on action recognition. Our motivation stems from the observation that 2D CNNs applied to individual frames of the video have remained solid performers in action recognition. In this work we empirically demonstrate the accuracy advantages of 3D CNNs over 2D CNNs within the framework of residual learning. Furthermore, we show that factorizing the 3D convolutional filters into separate spatial and temporal components yields significantly advantages in accuracy. Our empirical study leads to the design of a new spatiotemporal convolutional block \"R(2+1)D\" which gives rise to CNNs that achieve results comparable or superior to the state-of-the-art on Sports-1M, Kinetics, UCF101 and HMDB51. <|reference_end|>",
"<|reference_start|> Making a Case for 3D Convolutions for Object Segmentation in Videos: The task of object segmentation in videos is usually accomplished by processing appearance and motion information separately using standard 2D convolutional networks, followed by a learned fusion of the two sources of information. On the other hand, 3D convolutional networks have been successfully applied for video classification tasks, but have not been leveraged as effectively to problems involving dense per-pixel interpretation of videos compared to their 2D convolutional counterparts and lag behind the aforementioned networks in terms of performance. In this work, we show that 3D CNNs can be effectively applied to dense video prediction tasks such as salient object segmentation. We propose a simple yet effective encoder-decoder network architecture consisting entirely of 3D convolutions that can be trained end-to-end using a standard cross-entropy loss. To this end, we leverage an efficient 3D encoder, and propose a 3D decoder architecture, that comprises novel 3D Global Convolution layers and 3D Refinement modules. Our approach outperforms existing state-of-the-arts by a large margin on the DAVIS'16 Unsupervised, FBMS and ViSal dataset benchmarks in addition to being faster, thus showing that our architecture can efficiently learn expressive spatio-temporal features and produce high quality video segmentation masks. We have made our code and trained models publicly available at https://github.com/sabarim/3DC-Seg. <|reference_end|>"
] | [
7,
20,
54,
55
] | {"<|multi_cite_1_1|>": "arxiv-167006", "<|multi_cite_1_2|>": "ss-1087960", "<|multi_cite_1_3|>": "arxiv-238611", "<|multi_cite_2_1|>": "arxiv-94816", "<|multi_cite_2_2|>": "arxiv-119553", "<|multi_cite_3_1|>": "arxiv-167006", "<|multi_cite_3_2|>": "ss-1087960", "<|multi_cite_3_3|>": "arxiv-230495", "<|multi_cite_3_4|>": "ss-963656", "<|multi_cite_4_1|>": "ss-1087961", "<|multi_cite_4_2|>": "arxiv-254417", "<|multi_cite_4_3|>": "arxiv-286595", "<|cite_5|>": "arxiv-182102", "<|multi_cite_6_1|>": "arxiv-99247", "<|multi_cite_6_2|>": "arxiv-127025", "<|multi_cite_6_3|>": "arxiv-147571", "<|cite_7|>": "arxiv-119391", "<|cite_8|>": "arxiv-182102", "<|cite_9|>": "arxiv-182102", "<|multi_cite_10_1|>": "arxiv-254417", "<|multi_cite_10_2|>": "arxiv-286595", "<|multi_cite_11_1|>": "ss-1083246", "<|multi_cite_11_2|>": "arxiv-202456", "<|multi_cite_11_3|>": "arxiv-203844", "<|multi_cite_11_4|>": "arxiv-190981", "<|multi_cite_11_5|>": "ss-782258", "<|cite_12|>": "ss-1083246", "<|multi_cite_13_1|>": "arxiv-254417", "<|multi_cite_13_2|>": "arxiv-99247", "<|multi_cite_13_3|>": "arxiv-127025", "<|multi_cite_13_4|>": "arxiv-147571", "<|multi_cite_13_5|>": "arxiv-119553", "<|multi_cite_13_6|>": "arxiv-87830", "<|cite_14|>": "arxiv-99247", "<|multi_cite_15_1|>": "arxiv-254417", "<|multi_cite_15_2|>": "arxiv-119553", "<|multi_cite_16_1|>": "arxiv-119391", "<|multi_cite_16_2|>": "arxiv-238677", "<|multi_cite_16_3|>": "arxiv-182102", "<|cite_17|>": "arxiv-78899", "<|cite_18|>": "arxiv-119391", "<|cite_19|>": "arxiv-119391", "<|cite_20|>": "arxiv-182102", "<|cite_21|>": "arxiv-52559", "<|multi_cite_22_1|>": "arxiv-119391", "<|multi_cite_22_2|>": "arxiv-182102", "<|cite_23|>": "arxiv-182102", "<|multi_cite_24_1|>": "ss-778508", "<|multi_cite_24_2|>": "ss-974412", "<|multi_cite_24_3|>": "arxiv-69562", "<|multi_cite_24_4|>": "arxiv-96029", "<|multi_cite_25_1|>": "ss-1087961", "<|multi_cite_25_2|>": "arxiv-286595", "<|cite_26|>": "ss-1087961", "<|cite_27|>": "arxiv-141756", "<|cite_28|>": "arxiv-286595", "<|cite_29|>": "arxiv-198375", "<|cite_30|>": "arxiv-286595", "<|multi_cite_31_1|>": "arxiv-202456", "<|multi_cite_31_2|>": "ss-1083246", "<|cite_32|>": "arxiv-203844", "<|cite_33|>": "ss-782258", "<|cite_34|>": "arxiv-190981", "<|multi_cite_35_1|>": "arxiv-203844", "<|multi_cite_35_2|>": "arxiv-190981", "<|multi_cite_35_3|>": "ss-1087960", "<|multi_cite_35_4|>": "arxiv-225907", "<|multi_cite_35_5|>": "arxiv-167006", "<|multi_cite_36_1|>": "ss-1087960", "<|multi_cite_36_2|>": "arxiv-167006", "<|cite_37|>": "arxiv-238611", "<|cite_38|>": "arxiv-119553", "<|cite_39|>": "arxiv-151633", "<|cite_40|>": "arxiv-254417"} |
2408.11451 | <|paper_start|> Title: Bidirectional Gated Mamba for Sequential Recommendation
Abstract: Bidirectional Gated Mamba for Sequential Recommendation: In various domains, Sequential Recommender Systems (SRS) have become essential due to their superior capability to discern intricate user preferences. Typically, SRS utilize transformer-based architectures to forecast the subsequent item within a sequence. Nevertheless, the quadratic computational complexity inherent in these models often leads to inefficiencies, hindering the achievement of real-time recommendations. Mamba, a recent advancement, has exhibited exceptional performance in time series prediction, significantly enhancing both efficiency and accuracy. However, integrating Mamba directly into SRS poses several challenges. Its inherently unidirectional nature may constrain the model's capacity to capture the full context of user-item interactions, while its instability in state estimation can compromise its ability to detect short-term patterns within interaction sequences. To overcome these issues, we introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation. This framework leverages a Partially Flipped Mamba (PF-Mamba) to construct a bidirectional architecture specifically tailored to improve contextual modeling. Additionally, an input-sensitive Dense Selective Gate (DS Gate) is employed to optimize directional weights and enhance the processing of sequential information in PF-Mamba. For short sequence modeling, we have also developed a Feature Extract GRU (FE-GRU) to efficiently capture short-term dependencies. Empirical results indicate that SIGMA outperforms current models on five real-world datasets. Our implementation code is available at https://github.com/ziwliu-cityu/SIMGA to ease reproducibility.
Introduction
\label{sec:intro}
Over the past decade, sequential recommender systems (SRS) have demonstrated promising potential across various domains, including content streaming platforms <|cite_start|> (Reference: Recommendation on Live-Streaming Platforms: Dynamic Availability and Repeat Consumption: Live-streaming platforms broadcast user-generated video in real-time. Recommendation on these platforms shares similarities with traditional settings, such as a large volume of heterogeneous content and highly skewed interaction distributions. However, several challenges must be overcome to adapt recommendation algorithms to live-streaming platforms: first, content availability is dynamic which restricts users to choose from only a subset of items at any given time; during training and inference we must carefully handle this factor in order to properly account for such signals, where ‘non-interactions’ reflect availability as much as implicit preference. Streamers are also fundamentally different from ‘items’ in traditional settings: repeat consumption of specific channels plays a significant role, though the content itself is fundamentally ephemeral. In this work, we study recommendation in this setting of a dynamically evolving set of available items. We propose LiveRec, a self-attentive model that personalizes item ranking based on both historical interactions and current availability. We also show that carefully modelling repeat consumption plays a significant role in model performance. To validate our approach, and to inspire further research on this setting, we release a dataset containing 475M user interactions on Twitch over a 43-day period. We evaluate our approach on a recommendation task and show our method to outperform various strong baselines in ranking the currently available content.) <|cite_end|> and e-commerce <|cite_start|> (Reference: Time to shop for valentine's day: Shopping occasions and sequential recommendation in e-commerce: Currently, most sequence-based recommendation models aim to predict a user's next actions (e.g. next purchase) based on their past actions. These models either capture users' intrinsic preference (e.g. a comedy lover, or a fan of fantasy) from their long-term behavior patterns or infer their current needs by emphasizing recent actions. However, in e-commerce, intrinsic user behavior may be shifted by occasions such as birthdays, anniversaries, or gifting celebrations (Valentine's Day or Mother's Day), leading to purchases that deviate from long-term preferences and are not related to recent actions. In this work, we propose a novel next-item recommendation system which models a user's default, intrinsic preference, as well as two different kinds of occasion-based signals that may cause users to deviate from their normal behavior. More specifically, this model is novel in that it: (1) captures a personal occasion signal using an attention layer that models reoccurring occasions specific to that user (e.g. a birthday); (2) captures a global occasion signal using an attention layer that models seasonal or reoccurring occasions for many users (e.g. Christmas); (3) balances the user's intrinsic preferences with the personal and global occasion signals for different users at different timestamps with a gating layer. We explore two real-world e-commerce datasets (Amazon and Etsy) and show that the proposed model outperforms state-of-the-art models by 7.62% and 6.06% in predicting users' next purchase.) <|cite_end|>. 
To harness this potential and meet the demand for accurate next-item predictions <|cite_start|> (Reference: Deep Learning for Sequential Recommendation: Algorithms, Influential Factors, and Evaluations: In the field of sequential recommendation, deep learning (DL)-based methods have received a lot of attention in the past few years and surpassed traditional models such as Markov chain-based and factorization-based ones. However, there is little systematic study on DL-based methods, especially regarding to how to design an effective DL model for sequential recommendation. In this view, this survey focuses on DL-based sequential recommender systems by taking the aforementioned issues into consideration. Specifically,we illustrate the concept of sequential recommendation, propose a categorization of existing algorithms in terms of three types of behavioral sequence, summarize the key factors affecting the performance of DL-based models, and conduct corresponding evaluations to demonstrate the effects of these factors. We conclude this survey by systematically outlining future directions and challenges in this field.) <|cite_end|>, an increasing number of researchers are focusing on refining existing architectures and proposing novel approaches~\cite {35,39}.
\begin{figure}[!t]
\centering
\includegraphics[width = 0.95\linewidth]{FigureMaking/amazon_beauty_groupby.pdf}
\caption{Illustration of the long-tail user problem.}
\label{fig:intro}
\vspace{-3mm}
\end{figure}
Recently, Transformer-based models have emerged as the leading approach in sequential recommendation due to their outstanding performance <|cite_start|> (Reference: Transformers4Rec: Bridging the Gap between NLP and Sequential/Session-Based Recommendation: Much of the recent progress in sequential and session-based recommendation has been driven by improvements in model architecture and pretraining techniques originating in the field of Natural Language Processing. Transformer architectures in particular have facilitated building higher-capacity models and provided data augmentation and training techniques which demonstrably improve the effectiveness of sequential recommendation. But with a thousandfold more research going on in NLP, the application of transformers for recommendation understandably lags behind. To remedy this we introduce Transformers4Rec, an open-source library built upon HuggingFace’s Transformers library with a similar goal of opening up the advances of NLP based Transformers to the recommender system community and making these advancements immediately accessible for the tasks of sequential and session-based recommendation. Like its core dependency, Transformers4Rec is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. In order to demonstrate the usefulness of the library and the applicability of Transformer architectures in next-click prediction for user sessions, where sequence lengths are much shorter than those commonly found in NLP, we have leveraged Transformers4Rec to win two recent session-based recommendation competitions. In addition, we present in this paper the first comprehensive empirical analysis comparing many Transformer architectures and training approaches for the task of session-based recommendation. We demonstrate that the best Transformer architectures have superior performance across two e-commerce datasets while performing similarly to the baselines on two news datasets. We further evaluate in isolation the effectiveness of the different training techniques used in causal language modeling, masked language modeling, permutation language modeling and replacement token detection for a single Transformer architecture, XLNet. We establish that training XLNet with replacement token detection performs well across all datasets. Finally, we explore techniques to include side information such as item and user context features in order to establish best practices and show that the inclusion of side information uniformly improves recommendation performance. Transformers4Rec library is available at https://github.com/NVIDIA-Merlin/Transformers4Rec/) <|cite_end|>. By leveraging the powerful self-attention mechanism <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. 
On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|> <|cite_start|> (Reference: On The Computational Complexity of Self-Attention: Transformer architectures have led to remarkable progress in many state-of-art applications. However, despite their successes, modern transformers rely on the self-attention mechanism, whose time- and space-complexity is quadratic in the length of the input. Several approaches have been proposed to speed up self-attention mechanisms to achieve sub-quadratic running time; however, the large majority of these works are not accompanied by rigorous error guarantees. In this work, we establish lower bounds on the computational complexity of self-attention in a number of scenarios. We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even if the attention computation is performed only approximately, and for a variety of attention mechanisms. As a complement to our lower bounds, we show that it is indeed possible to approximate dot-product self-attention using finite Taylor series in linear-time, at the cost of having an exponential dependence on the polynomial order.) <|cite_end|>, these models have demonstrated a remarkable ability to deliver accurate predictions. However, despite their impressive performance, current transformer-based models are proven inefficient since the amount of computation grows quadratically as the length of the input sequence increases <|cite_start|> (Reference: On The Computational Complexity of Self-Attention: Transformer architectures have led to remarkable progress in many state-of-art applications. However, despite their successes, modern transformers rely on the self-attention mechanism, whose time- and space-complexity is quadratic in the length of the input. Several approaches have been proposed to speed up self-attention mechanisms to achieve sub-quadratic running time; however, the large majority of these works are not accompanied by rigorous error guarantees. In this work, we establish lower bounds on the computational complexity of self-attention in a number of scenarios. We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even if the attention computation is performed only approximately, and for a variety of attention mechanisms. As a complement to our lower bounds, we show that it is indeed possible to approximate dot-product self-attention using finite Taylor series in linear-time, at the cost of having an exponential dependence on the polynomial order.) <|cite_end|>. Other approaches, such as RNN-based models \eg GRU4Rec <|cite_start|> (Reference: When Recurrent Neural Networks meet the Neighborhood for Session-Based
Recommendation: Deep learning methods have led to substantial progress in various application fields of AI, and in recent years a number of proposals were made to improve recommender systems with artificial neural networks. For the problem of making session-based recommendations, i.e., for recommending the next item in an anonymous session, Hidasi et al.~recently investigated the application of recurrent neural networks with Gated Recurrent Units (GRU4REC). Assessing the true effectiveness of such novel approaches based only on what is reported in the literature is however difficult when no standard evaluation protocols are applied and when the strength of the baselines used in the performance comparison is not clear. In this work we show based on a comprehensive empirical evaluation that a heuristics-based nearest neighbor (kNN) scheme for sessions outperforms GRU4REC in the large majority of the tested configurations and datasets. Neighborhood sampling and efficient in-memory data structures ensure the scalability of the kNN method. The best results in the end were often achieved when we combine the kNN approach with GRU4REC, which shows that RNNs can leverage sequential signals in the data that cannot be detected by the co-occurrence-based kNN method.) <|cite_end|> and MLP-based models \eg MLP4Rec <|cite_start|> (Reference: MLP4Rec: A Pure MLP Architecture for Sequential Recommendations: Self-attention models have achieved state-of-the-art performance in sequential recommender systems by capturing the sequential dependencies among user-item interactions. However, they rely on positional embeddings to retain the sequential information, which may break the semantics of item embeddings. In addition, most existing works assume that such sequential dependencies exist solely in the item embeddings, but neglect their existence among the item features. In this work, we propose a novel sequential recommender system (MLP4Rec) based on the recent advances of MLP-based architectures, which is naturally sensitive to the order of items in a sequence. To be specific, we develop a tri-directional fusion scheme to coherently capture sequential, cross-channel and cross-feature correlations. Extensive experiments demonstrate the effectiveness of MLP4Rec over various representative baselines upon two benchmark datasets. The simple architecture of MLP4Rec also leads to the linear computational complexity as well as much fewer model parameters than existing self-attention methods.) <|cite_end|>, are proven to be efficient due to their linear complexity. Nevertheless, they have struggled with handling long and complex patterns <|cite_start|> (Reference: Evolution of deep learning-based sequential recommender systems: from current trends to new perspectives: The recommender system which gets higher in practical use in applying the Apriori algorithm in the early 2000s has revolutionized our daily life as it currently is widely used by big-tech platform companies. In the early stages of the development of recommender systems, services that can be provided to users were simply derived to the extent that only related products were recommended. However, the new research wave like deep learning-based recommender systems due to the development of information technology and the complexity of users’ online behavior extensively grabs researchers’ and academia’s attention in the field of recommender systems. 
This paper describes the algorithms and characteristics of the recent popular deep learning-based representative models such as recurrent neural networks (RNNs), convolutional neural networks (CNNs), generative adversarial networks (GANs), and graph neural networks (GNNs) in the view of the sequential recommendation. The sequential recommendation of understanding user preferences in chronological order is useful for analyzing user-item interaction more accurately and flexibly. Therefore, the models specialized in sequential recommendation take advantage of understanding user behavior through temporal factors and improving recommendation quality by easily realizing the correlation between user and items. Also, the transformer-based model was developed to improve the problem of long-term dependency between users and items through factors, such as points, lines, and nodes, experienced in the early models of RNN and CNN and self-supervised learning (SSL)-based models, which are originally purposed to solve the data sparsity issues of recommender systems, will be discussed in this paper.) <|cite_end|>. All these methods above seem to have suffered from a significant trade-off between effectiveness and efficiency. Consequently, a specially designed State Space Model (SSM) called Mamba <|cite_start|> (Reference: Mamba: Linear-Time Sequence Modeling with Selective State Spaces: Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.) <|cite_end|> has been proposed. By employing simple input-dependent selection on the original SSM <|cite_start|> (Reference: Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models: Sequential recommendation aims to estimate the dynamic user preferences and sequential dependencies among historical user behaviors. 
Although Transformer-based models have proven to be effective for sequential recommendation, they suffer from the inference inefficiency problem stemming from the quadratic computational complexity of attention operators, especially for long behavior sequences. Inspired by the recent success of state space models (SSMs), we propose Mamba4Rec, which is the first work to explore the potential of selective SSMs for efficient sequential recommendation. Built upon the basic Mamba block which is a selective SSM with an efficient hardware-aware parallel algorithm, we design a series of sequential modeling techniques to further promote model performance while maintaining inference efficiency. Through experiments on public datasets, we demonstrate how Mamba4Rec effectively tackles the effectiveness-efficiency dilemma, outperforming both RNN- and attention-based baselines in terms of both effectiveness and efficiency. The code is available at https://github.com/chengkai-liu/Mamba4Rec.) <|cite_end|> <|cite_start|> (Reference: State-space models: Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the...) <|cite_end|>, it has demonstrated remarkable efficiency and effectiveness.
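For concreteness, the selective SSM at the core of Mamba can be summarized by the following schematic recurrence over a length-$L$ sequence (following the cited work, under a first-order discretization of the continuous-time system; the step size $\Delta_t$ and the projections $\mathbf{B}_t$, $\mathbf{C}_t$ are computed as simple functions of the current input $\mathbf{x}_t$, which is the input-dependent selection mentioned above):
\begin{equation}
\bar{\mathbf{A}}_t = \exp(\Delta_t \mathbf{A}), \qquad
\bar{\mathbf{B}}_t \approx \Delta_t \mathbf{B}_t, \qquad
\mathbf{h}_t = \bar{\mathbf{A}}_t \mathbf{h}_{t-1} + \bar{\mathbf{B}}_t \mathbf{x}_t, \qquad
\mathbf{y}_t = \mathbf{C}_t \mathbf{h}_t .
\end{equation}
Each step touches only a fixed-size hidden state $\mathbf{h}_t$, so the whole sequence is processed in time linear in $L$, in contrast to the $O(L^2)$ cost of self-attention.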
\begin{figure*}[!t]
\centering
\includegraphics[width = \linewidth]{FigureMaking/SIGMA_.pdf}
\caption{Framework of the proposed SIGMA. The core part of this framework is the G-Mamba Block, which directly tackles the context modeling and short sequence modeling challenges that arise when introducing Mamba to SRS.}
\label{fig:Overview}
\end{figure*}
However, two significant challenges hinder the direct adoption of Mamba in SRS:
\begin{itemize}[leftmargin=*]
\item \textbf{Context Modeling}:
While previous research has demonstrated Mamba’s reliability in capturing sequential information <|cite_start|> (Reference: Mamba: Linear-Time Sequence Modeling with Selective State Spaces: Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.) <|cite_end|> <|cite_start|> (Reference: Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation: Sequential Recommenders have been widely applied in various online services, aiming to model users' dynamic interests from their sequential interactions. With users increasingly engaging with online platforms, vast amounts of lifelong user behavioral sequences have been generated. However, existing sequential recommender models often struggle to handle such lifelong sequences. The primary challenges stem from computational complexity and the ability to capture long-range dependencies within the sequence. Recently, a state space model featuring a selective mechanism (i.e., Mamba) has emerged. In this work, we investigate the performance of Mamba for lifelong sequential recommendation (i.e., length>=2k). More specifically, we leverage the Mamba block to model lifelong user sequences selectively. We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences. Experiments on two real-world datasets demonstrate the superiority of Mamba. We found that RecMamba achieves performance comparable to the representative model while significantly reducing training duration by approximately 70% and memory costs by 80%. Codes and data are available at \url{https://github.com/nancheng58/RecMamba}.) <|cite_end|>, its unidirectional architecture imposes significant limitations in SRS applications. 
Because it captures only users' past behaviors, Mamba cannot leverage future contextual information, potentially leading to an incomplete understanding of user preferences~\cite{2,15}.
For instance, if a user consistently purchases household items but begins to show interest in sports equipment, a model that does not consider future contexts may struggle to recognize this shift, resulting in sub-optimal next-item predictions <|cite_start|> (Reference: Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation: Transformers have been the most successful architecture for various speech modeling tasks, including speech separation. However, the self-attention mechanism in transformers with quadratic complexity is inefficient in computation and memory. Recent models incorporate new layers and modules along with transformers for better performance but also introduce extra model complexity. In this work, we replace transformers with Mamba, a selective state space model, for speech separation. We propose dual-path Mamba, which models short-term and long-term forward and backward dependency of speech signals using selective state spaces. Our experimental results on the WSJ0-2mix data show that our dual-path Mamba models of comparably smaller sizes outperform state-of-the-art RNN model DPRNN, CNN model WaveSplit, and transformer model Sepformer. Code: https://github.com/xi-j/Mamba-TasNet) <|cite_end|> <|cite_start|> (Reference: Bidirectional Distillation for Top-K Recommender System: Recommender systems (RS) have started to employ knowledge distillation, which is a model compression technique training a compact model (student) with the knowledge transferred from a cumbersome model (teacher). The state-of-the-art methods rely on unidirectional distillation transferring the knowledge only from the teacher to the student, with an underlying assumption that the teacher is always superior to the student. However, we demonstrate that the student performs better than the teacher on a significant proportion of the test set, especially for RS. Based on this observation, we propose Bidirectional Distillation (BD) framework whereby both the teacher and the student collaboratively improve with each other. Specifically, each model is trained with the distillation loss that makes to follow the other's prediction along with its original loss function. For effective bidirectional distillation, we propose rank discrepancy-aware sampling scheme to distill only the informative knowledge that can fully enhance each other. The proposed scheme is designed to effectively cope with a large performance gap between the teacher and the student. Trained in the bidirectional way, it turns out that both the teacher and the student are significantly improved compared to when being trained separately. Our extensive experiments on real-world datasets show that our proposed framework consistently outperforms the state-of-the-art competitors. We also provide analyses for an in-depth understanding of BD and ablation studies to verify the effectiveness of each proposed component.) <|cite_end|>.
\item \textbf{Short Sequence Modeling}:
This challenge is primarily driven by the long-tail user problem, a common issue in sequential recommendation. Long-tail users refer to such users who interact with only a few items but typically receive lower-quality recommendations compared to the normal ones <|cite_start|> (Reference: Sequential and Diverse Recommendation with Long Tail.: Sequential recommendation is a task that learns a temporal dynamic of a user behavior in sequential data and predicts items that a user would like afterward. However, diversity has been rarely emphasized in the context of sequential recommendation. Sequential and diverse recommendation must learn temporal preference on diverse items as well as on general items. Thus, we propose a sequential and diverse recommendation model that predicts a ranked list containing general items and also diverse items without compromising significant accuracy.To learn temporal preference on diverse items as well as on general items, we cluster and relocate consumed long tail items to make a pseudo ground truth for diverse items and learn the preference on long tail using recurrent neural network, which enables us to directly learn a ranking function. Extensive online and offline experiments deployed on a commercial platform demonstrate that our models significantly increase diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, and consequently our models improve user satisfaction.) <|cite_end|> <|cite_start|> (Reference: Sequential and Diverse Recommendation with Long Tail.: Sequential recommendation is a task that learns a temporal dynamic of a user behavior in sequential data and predicts items that a user would like afterward. However, diversity has been rarely emphasized in the context of sequential recommendation. Sequential and diverse recommendation must learn temporal preference on diverse items as well as on general items. Thus, we propose a sequential and diverse recommendation model that predicts a ranked list containing general items and also diverse items without compromising significant accuracy.To learn temporal preference on diverse items as well as on general items, we cluster and relocate consumed long tail items to make a pseudo ground truth for diverse items and learn the preference on long tail using recurrent neural network, which enables us to directly learn a ranking function. Extensive online and offline experiments deployed on a commercial platform demonstrate that our models significantly increase diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, and consequently our models improve user satisfaction.) <|cite_end|>. Furthermore, the instability in state estimation caused by limited data in short sequences <|cite_start|> (Reference: Mamba: Linear-Time Sequence Modeling with Selective State Spaces: Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module. Many subquadratic-time architectures such as linear attention, gated convolution and recurrent models, and structured state space models (SSMs) have been developed to address Transformers' computational inefficiency on long sequences, but they have not performed as well as attention on important modalities such as language. We identify that a key weakness of such models is their inability to perform content-based reasoning, and make several improvements. 
First, simply letting the SSM parameters be functions of the input addresses their weakness with discrete modalities, allowing the model to selectively propagate or forget information along the sequence length dimension depending on the current token. Second, even though this change prevents the use of efficient convolutions, we design a hardware-aware parallel algorithm in recurrent mode. We integrate these selective SSMs into a simplified end-to-end neural network architecture without attention or even MLP blocks (Mamba). Mamba enjoys fast inference (5$\times$ higher throughput than Transformers) and linear scaling in sequence length, and its performance improves on real data up to million-length sequences. As a general sequence model backbone, Mamba achieves state-of-the-art performance across several modalities such as language, audio, and genomics. On language modeling, our Mamba-3B model outperforms Transformers of the same size and matches Transformers twice its size, both in pretraining and downstream evaluation.) <|cite_end|> <|cite_start|> (Reference: Simplified State Space Layers for Sequence Modeling: Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task.) <|cite_end|> exacerbates this problem when Mamba is directly applied to SRS, highlighting the need for effective short sequence modeling. For illustration, we compare two leading baselines, Mamba4Rec <|cite_start|> (Reference: Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models: Sequential recommendation aims to estimate the dynamic user preferences and sequential dependencies among historical user behaviors. Although Transformer-based models have proven to be effective for sequential recommendation, they suffer from the inference inefficiency problem stemming from the quadratic computational complexity of attention operators, especially for long behavior sequences. Inspired by the recent success of state space models (SSMs), we propose Mamba4Rec, which is the first work to explore the potential of selective SSMs for efficient sequential recommendation. Built upon the basic Mamba block which is a selective SSM with an efficient hardware-aware parallel algorithm, we design a series of sequential modeling techniques to further promote model performance while maintaining inference efficiency. Through experiments on public datasets, we demonstrate how Mamba4Rec effectively tackles the effectiveness-efficiency dilemma, outperforming both RNN- and attention-based baselines in terms of both effectiveness and efficiency. The code is available at https://github.com/chengkai-liu/Mamba4Rec.) 
<|cite_end|> and SASRec <|cite_start|> (Reference: Self-Attentive Sequential Recommendation: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.) <|cite_end|>, against our proposed framework on the Beauty dataset. As shown in Figure~\ref{fig:intro}, the histogram depicts the number of users in each group, while the line represents recommendation performance in terms of Hit@10. SASRec outperforms Mamba4Rec in the first three groups, demonstrating Mamba4Rec's exacerbation of the long-tail user problem.
\end{itemize}
To address these challenges and better leverage Mamba's strengths, we propose an innovative framework called \textbf{\underline{S}}elect\textbf{\underline{I}}ve \textbf{\underline{G}}ated \textbf{\underline{MA}}mba for Sequential Recommendation (SIGMA). Our approach introduces the Partially Flipped Mamba (PF-Mamba), a specialized bidirectional structure that captures contextual information <|cite_start|> (Reference: Mamba4Rec: Towards Efficient Sequential Recommendation with Selective State Space Models: Sequential recommendation aims to estimate the dynamic user preferences and sequential dependencies among historical user behaviors. Although Transformer-based models have proven to be effective for sequential recommendation, they suffer from the inference inefficiency problem stemming from the quadratic computational complexity of attention operators, especially for long behavior sequences. Inspired by the recent success of state space models (SSMs), we propose Mamba4Rec, which is the first work to explore the potential of selective SSMs for efficient sequential recommendation. Built upon the basic Mamba block which is a selective SSM with an efficient hardware-aware parallel algorithm, we design a series of sequential modeling techniques to further promote model performance while maintaining inference efficiency. Through experiments on public datasets, we demonstrate how Mamba4Rec effectively tackles the effectiveness-efficiency dilemma, outperforming both RNN- and attention-based baselines in terms of both effectiveness and efficiency. The code is available at https://github.com/chengkai-liu/Mamba4Rec.) <|cite_end|> <|cite_start|> (Reference: Dual-path Mamba: Short and Long-term Bidirectional Selective Structured State Space Models for Speech Separation: Transformers have been the most successful architecture for various speech modeling tasks, including speech separation. However, the self-attention mechanism in transformers with quadratic complexity is inefficient in computation and memory. Recent models incorporate new layers and modules along with transformers for better performance but also introduce extra model complexity. In this work, we replace transformers with Mamba, a selective state space model, for speech separation. We propose dual-path Mamba, which models short-term and long-term forward and backward dependency of speech signals using selective state spaces. Our experimental results on the WSJ0-2mix data show that our dual-path Mamba models of comparably smaller sizes outperform state-of-the-art RNN model DPRNN, CNN model WaveSplit, and transformer model Sepformer. Code: https://github.com/xi-j/Mamba-TasNet) <|cite_end|>. We then introduce an input-dependent Dense Selective Gate (DS Gate) to allocate the weights of the two directions and further filter the information. Additionally, we develop a Feature Extract GRU (FE-GRU) to better model short-term patterns in interaction sequences <|cite_start|> (Reference: Session-based Recommendations with Recurrent Neural Networks: We apply recurrent neural networks (RNN) on a new domain, namely recommender systems. Real-life recommender systems often face the problem of having to base recommendations only on short session-based data (e.g. a small sportsware website) instead of long user histories (as in the case of Netflix). In this situation the frequently praised matrix factorization approaches are not accurate. This problem is usually overcome in practice by resorting to item-to-item recommendations, i.e. 
recommending similar items. We argue that by modeling the whole session, more accurate recommendations can be provided. We therefore propose an RNN-based approach for session-based recommendations. Our approach also considers practical aspects of the task and introduces several modifications to classic RNNs such as a ranking loss function that make it more viable for this specific problem. Experimental results on two data-sets show marked improvements over widely used approaches.) <|cite_end|>, offering a possible solution to the long-tail user problem.
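To make the overall design more concrete, the following is a minimal, illustrative sketch of how a gated bidirectional Mamba block with a parallel GRU branch could be wired up. It is our own simplification of the components described above (PF-Mamba, the DS Gate, and FE-GRU): the module names, dimensions, the full-sequence flip, and the linear-sigmoid gate are all assumptions made for illustration, not the authors' reference implementation.
\begin{verbatim}
# Illustrative sketch only; assumes the `mamba_ssm` package (GPU build) is available.
import torch
import torch.nn as nn
from mamba_ssm import Mamba


class GatedBiMambaBlock(nn.Module):
    def __init__(self, d_model: int, d_state: int = 16, d_conv: int = 4, expand: int = 2):
        super().__init__()
        # Two Mamba blocks: one reads the sequence forward, one reads a flipped copy.
        self.fwd = Mamba(d_model=d_model, d_state=d_state, d_conv=d_conv, expand=expand)
        self.bwd = Mamba(d_model=d_model, d_state=d_state, d_conv=d_conv, expand=expand)
        # Input-dependent gate mixing the two directions (our stand-in for a
        # "Dense Selective Gate"; the exact gating formula is an assumption).
        self.gate = nn.Sequential(nn.Linear(d_model, d_model), nn.Sigmoid())
        # GRU branch intended to emphasize short-term, recent-interaction patterns.
        self.gru = nn.GRU(d_model, d_model, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model) item-embedding sequence
        h_fwd = self.fwd(x)
        h_bwd = torch.flip(self.bwd(torch.flip(x, dims=[1])), dims=[1])
        g = self.gate(x)                      # per-position mixing weights in (0, 1)
        h_bi = g * h_fwd + (1.0 - g) * h_bwd  # gated fusion of the two directions
        h_short, _ = self.gru(x)              # short-term branch
        return self.norm(x + h_bi + h_short)  # residual combination


# Usage: encode 2 sequences of length 50 with 64-dimensional item embeddings.
# block = GatedBiMambaBlock(d_model=64)
# out = block(torch.randn(2, 50, 64))        # -> shape (2, 50, 64)
\end{verbatim}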
Our contributions are summarized as follows:
\begin{itemize}[leftmargin=*]
\item We identify the limitations of Mamba when applied to SRS, attributing them to its unidirectional structure and instability in state estimation for short sequences.
\item We introduce SIGMA, a novel framework featuring a Partially Flipped Mamba with a Dense Selective Gate and a Feature Extract GRU, which respectively address the challenges of context modeling and short sequence modeling.
\item We validate SIGMA's performance on five public real-world datasets, demonstrating its superiority.
\end{itemize}
Related Work
\subsection{Sequential Recommendation}
Advancements in deep learning have transformed recommendation systems, making them more personalized and accurate in next-item prediction <|cite_start|> (Reference: Diffusion Augmentation for Sequential Recommendation: Sequential recommendation (SRS) has become the technical foundation in many applications recently, which aims to recommend the next item based on the user's historical interactions. However, sequential recommendation often faces the problem of data sparsity, which widely exists in recommender systems. Besides, most users only interact with a few items, but existing SRS models often underperform these users. Such a problem, named the long-tail user problem, is still to be resolved. Data augmentation is a distinct way to alleviate these two problems, but they often need fabricated training strategies or are hindered by poor-quality generated interactions. To address these problems, we propose a Diffusion Augmentation for Sequential Recommendation (DiffuASR) for a higher quality generation. The augmented dataset by DiffuASR can be used to train the sequential recommendation models directly, free from complex training procedures. To make the best of the generation ability of the diffusion model, we first propose a diffusion-based pseudo sequence generation framework to fill the gap between image and sequence generation. Then, a sequential U-Net is designed to adapt the diffusion noise prediction model U-Net to the discrete sequence generation task. At last, we develop two guide strategies to assimilate the preference between generated and origin sequences. To validate the proposed DiffuASR, we conduct extensive experiments on three real-world datasets with three sequential recommendation models. The experimental results illustrate the effectiveness of DiffuASR. As far as we know, DiffuASR is one pioneer that introduce the diffusion model to the recommendation.) <|cite_end|> <|cite_start|> (Reference: Disentangling interest and conformity for eliminating popularity bias in session-based recommendation: ) <|cite_end|> <|cite_start|> (Reference: Large Language Models Enhanced Sequential Recommendation for Long-tail User and Item: Sequential recommendation systems (SRS) serve the purpose of predicting users' subsequent preferences based on their past interactions and have been applied across various domains such as e-commerce and social networking platforms. However, practical SRS encounters challenges due to the fact that most users engage with only a limited number of items, while the majority of items are seldom consumed. These challenges, termed as the long-tail user and long-tail item dilemmas, often create obstacles for traditional SRS methods. Mitigating these challenges is crucial as they can significantly impact user satisfaction and business profitability. While some research endeavors have alleviated these issues, they still grapple with issues such as seesaw or noise stemming from the scarcity of interactions. The emergence of large language models (LLMs) presents a promising avenue to address these challenges from a semantic standpoint. In this study, we introduce the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR), which leverages semantic embeddings from LLMs to enhance SRS performance without increasing computational overhead. To combat the long-tail item challenge, we propose a dual-view modeling approach that fuses semantic information from LLMs with collaborative signals from traditional SRS. 
To address the long-tail user challenge, we introduce a retrieval augmented self-distillation technique to refine user preference representations by incorporating richer interaction data from similar users. Through comprehensive experiments conducted on three authentic datasets using three widely used SRS models, our proposed enhancement framework demonstrates superior performance compared to existing methodologies.) <|cite_end|> <|cite_start|> (Reference: MLP4Rec: A Pure MLP Architecture for Sequential Recommendations: Self-attention models have achieved state-of-the-art performance in sequential recommender systems by capturing the sequential dependencies among user-item interactions. However, they rely on positional embeddings to retain the sequential information, which may break the semantics of item embeddings. In addition, most existing works assume that such sequential dependencies exist solely in the item embeddings, but neglect their existence among the item features. In this work, we propose a novel sequential recommender system (MLP4Rec) based on the recent advances of MLP-based architectures, which is naturally sensitive to the order of items in a sequence. To be specific, we develop a tri-directional fusion scheme to coherently capture sequential, cross-channel and cross-feature correlations. Extensive experiments demonstrate the effectiveness of MLP4Rec over various representative baselines upon two benchmark datasets. The simple architecture of MLP4Rec also leads to the linear computational complexity as well as much fewer model parameters than existing self-attention methods.) <|cite_end|> <|cite_start|> (Reference: SMLP4Rec: An Efficient all-MLP Architecture for Sequential Recommendations: Self-attention models have achieved the state-of-the-art performance in sequential recommender systems by capturing the sequential dependencies among user–item interactions. However, they rely on adding positional embeddings to the item sequence to retain the sequential information, which may break the semantics of item embeddings due to the heterogeneity between these two types of embeddings. In addition, most existing works assume that such dependencies exist solely in the item embeddings, but neglect their existence among the item features. In our previous study, we proposed a novel sequential recommendation model, i.e., MLP4Rec, based on the recent advances of MLP-Mixer architectures, which is naturally sensitive to the order of items in a sequence because matrix elements related to different positions of a sequence will be given different weights in training. We developed a tri-directional fusion scheme to coherently capture sequential, cross-channel, and cross-feature correlations with linear computational complexity as well as much fewer model parameters than existing self-attention methods. However, the cascading mixer structure, the large number of normalization layers between different mixer layers, and the noise generated by these operations limit the efficiency of information extraction and the effectiveness of MLP4Rec. In this extended version, we propose a novel framework – SMLP4Rec for sequential recommendation to address the aforementioned issues. The new framework changes the flawed cascading structure to a parallel mode, and integrates normalization layers to minimize their impact on the model’s efficiency while maximizing their effectiveness. As a result, the training speed and prediction accuracy of SMLP4Rec are vastly improved in comparison to MLP4Rec. 
Extensive experimental results demonstrate that the proposed method is significantly superior to the state-of-the-art approaches. The implementation code is available online to ease reproducibility.) <|cite_end|> <|cite_start|> (Reference: E4SRec: An Elegant Effective Efficient Extensible Solution of Large Language Models for Sequential Recommendation: The recent advancements in Large Language Models (LLMs) have sparked interest in harnessing their potential within recommender systems. Since LLMs are designed for natural language tasks, existing recommendation approaches have predominantly transformed recommendation tasks into open-domain natural language generation tasks. However, this approach necessitates items to possess rich semantic information, often generates out-of-range results, and suffers from notably low efficiency and limited extensibility. Furthermore, practical ID-based recommendation strategies, reliant on a huge number of unique identities (IDs) to represent users and items, have gained prominence in real-world recommender systems due to their effectiveness and efficiency. Nevertheless, the incapacity of LLMs to model IDs presents a formidable challenge when seeking to leverage LLMs for personalized recommendations. In this paper, we introduce an Elegant Effective Efficient Extensible solution for large language models for Sequential Recommendation (E4SRec), which seamlessly integrates LLMs with traditional recommender systems that exclusively utilize IDs to represent items. Specifically, E4SRec takes ID sequences as inputs, ensuring that the generated outputs fall within the candidate lists. Furthermore, E4SRec possesses the capability to generate the entire ranking list in a single forward process, and demands only a minimal set of pluggable parameters, which are trained for each dataset while keeping the entire LLM frozen. We substantiate the effectiveness, efficiency, and extensibility of our proposed E4SRec through comprehensive experiments conducted on four widely-used real-world datasets. The implementation code is accessible at https://github.com/HestiaSky/E4SRec/.) <|cite_end|> <|cite_start|> (Reference: STRec: Sparse Transformer for Sequential Recommendations: With the rapid evolution of transformer architectures, researchers are exploring their application in sequential recommender systems (SRSs) and presenting promising performance on SRS tasks compared with former SRS models. However, most existing transformer-based SRS frameworks retain the vanilla attention mechanism, which calculates the attention scores between all item-item pairs. With this setting, redundant item interactions can harm the model performance and consume much computation time and memory. In this paper, we identify the sparse attention phenomenon in transformer-based SRS models and propose Sparse Transformer for sequential Recommendation tasks (STRec) to achieve the efficient computation and improved performance. Specifically, we replace self-attention with cross-attention, making the model concentrate on the most relevant item interactions. To determine these necessary interactions, we design a novel sampling strategy to detect relevant items based on temporal information. Extensive experimental results validate the effectiveness of STRec, which achieves the state-of-the-art accuracy while reducing 54% inference time and 70% memory cost. We also provide massive extended experiments to further investigate the property of our framework.) 
<|cite_end|> <|cite_start|> (Reference: Sim2Rec: A Simulator-based Decision-making Approach to Optimize Real-World Long-term User Engagement in Sequential Recommender Systems: Long-term user engagement (LTE) optimization in sequential recommender systems (SRS) is shown to be suited by reinforcement learning (RL) which finds a policy to maximize long-term rewards. Meanwhile, RL has its shortcomings, particularly requiring a large number of online samples for exploration, which is risky in real-world applications. One of the appealing ways to avoid the risk is to build a simulator and learn the optimal recommendation policy in the simulator. In LTE optimization, the simulator is to simulate multiple users' daily feedback for given recommendations. However, building a user simulator with no reality-gap, i.e., can predict user's feedback exactly, is unrealistic because the users' reaction patterns are complex and historical logs for each user are limited, which might mislead the simulator-based recommendation policy. In this paper, we present a practical simulator-based recommender policy training approach, Simulation-to-Recommendation (Sim2Rec) to handle the reality-gap problem for LTE optimization. Specifically, Sim2Rec introduces a simulator set to generate various possibilities of user behavior patterns, then trains an environment-parameter extractor to recognize users' behavior patterns in the simulators. Finally, a context-aware policy is trained to make the optimal decisions on all of the variants of the users based on the inferred environment-parameters. The policy is transferable to unseen environments (e.g., the real world) directly as it has learned to recognize all various user behavior patterns and to make the correct decisions based on the inferred environment-parameters. Experiments are conducted in synthetic environments and a real-world large-scale ride-hailing platform, DidiChuxing. The results show that Sim2Rec achieves significant performance improvement, and produces robust recommendations in unseen environments.) <|cite_end|> <|cite_start|> (Reference: Diffusion-based Contrastive Learning for Sequential Recommendation: Self-supervised contrastive learning, which directly extracts inherent data correlations from unlabeled data, has been widely utilized to mitigate the data sparsity issue in sequential recommendation. The majority of existing methods create different augmented views of the same user sequence via random augmentation, and subsequently minimize their distance in the embedding space to enhance the quality of user representations. However, random augmentation often disrupts the semantic information and interest evolution pattern inherent in the user sequence, leading to the generation of semantically distinct augmented views. Promoting similarity of these semantically diverse augmented sequences can render the learned user representations insensitive to variations in user preferences and interest evolution, contradicting the core learning objectives of sequential recommendation. To address this issue, we leverage the inherent characteristics of sequential recommendation and propose the use of context information to generate more reasonable augmented positive samples. Specifically, we introduce a context-aware diffusion-based contrastive learning method for sequential recommendation. 
Given a user sequence, our method selects certain positions and employs a context-aware diffusion model to generate alternative items for these positions with the guidance of context information. These generated items then replace the corresponding original items, creating a semantically consistent augmented view of the original sequence. Additionally, to maintain representation cohesion, item embeddings are shared between the diffusion model and the recommendation model, and the entire framework is trained in an end-to-end manner. Extensive experiments on five benchmark datasets demonstrate the superiority of our proposed method.) <|cite_end|>. Early sequential recommendation frameworks have adopted CNNs and RNNs to capture users' preferences but faced issues like catastrophic forgetting when dealing with long-term dependencies <|cite_start|> (Reference: Transformers4Rec: Bridging the Gap between NLP and Sequential/Session-Based Recommendation: Much of the recent progress in sequential and session-based recommendation has been driven by improvements in model architecture and pretraining techniques originating in the field of Natural Language Processing. Transformer architectures in particular have facilitated building higher-capacity models and provided data augmentation and training techniques which demonstrably improve the effectiveness of sequential recommendation. But with a thousandfold more research going on in NLP, the application of transformers for recommendation understandably lags behind. To remedy this we introduce Transformers4Rec, an open-source library built upon HuggingFace’s Transformers library with a similar goal of opening up the advances of NLP based Transformers to the recommender system community and making these advancements immediately accessible for the tasks of sequential and session-based recommendation. Like its core dependency, Transformers4Rec is designed to be extensible by researchers, simple for practitioners, and fast and robust in industrial deployments. In order to demonstrate the usefulness of the library and the applicability of Transformer architectures in next-click prediction for user sessions, where sequence lengths are much shorter than those commonly found in NLP, we have leveraged Transformers4Rec to win two recent session-based recommendation competitions. In addition, we present in this paper the first comprehensive empirical analysis comparing many Transformer architectures and training approaches for the task of session-based recommendation. We demonstrate that the best Transformer architectures have superior performance across two e-commerce datasets while performing similarly to the baselines on two news datasets. We further evaluate in isolation the effectiveness of the different training techniques used in causal language modeling, masked language modeling, permutation language modeling and replacement token detection for a single Transformer architecture, XLNet. We establish that training XLNet with replacement token detection performs well across all datasets. Finally, we explore techniques to include side information such as item and user context features in order to establish best practices and show that the inclusion of side information uniformly improves recommendation performance. 
Transformers4Rec library is available at https://github.com/NVIDIA-Merlin/Transformers4Rec/) <|cite_end|> <|cite_start|> (Reference: Sequential and Diverse Recommendation with Long Tail.: Sequential recommendation is a task that learns a temporal dynamic of a user behavior in sequential data and predicts items that a user would like afterward. However, diversity has been rarely emphasized in the context of sequential recommendation. Sequential and diverse recommendation must learn temporal preference on diverse items as well as on general items. Thus, we propose a sequential and diverse recommendation model that predicts a ranked list containing general items and also diverse items without compromising significant accuracy.To learn temporal preference on diverse items as well as on general items, we cluster and relocate consumed long tail items to make a pseudo ground truth for diverse items and learn the preference on long tail using recurrent neural network, which enables us to directly learn a ranking function. Extensive online and offline experiments deployed on a commercial platform demonstrate that our models significantly increase diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, and consequently our models improve user satisfaction.) <|cite_end|>. Then, the transformer-based models have emerged as powerful methods with their self-attention mechanism, significantly improving performance by selectively capturing the complex user-item interactions <|cite_start|> (Reference: Self-Attentive Sequential Recommendation: Sequential dynamics are a key feature of many modern recommender systems, which seek to capture the `context' of users' activities on the basis of actions they have performed recently. To capture such patterns, two approaches have proliferated: Markov Chains (MCs) and Recurrent Neural Networks (RNNs). Markov Chains assume that a user's next action can be predicted on the basis of just their last (or last few) actions, while RNNs in principle allow for longer-term semantics to be uncovered. Generally speaking, MC-based methods perform best in extremely sparse datasets, where model parsimony is critical, while RNNs perform better in denser datasets where higher model complexity is affordable. The goal of our work is to balance these two goals, by proposing a self-attention based sequential model (SASRec) that allows us to capture long-term semantics (like an RNN), but, using an attention mechanism, makes its predictions based on relatively few actions (like an MC). At each time step, SASRec seeks to identify which items are `relevant' from a user's action history, and use them to predict the next item. Extensive empirical studies show that our method outperforms various state-of-the-art sequential models (including MC/CNN/RNN-based approaches) on both sparse and dense datasets. Moreover, the model is an order of magnitude more efficient than comparable CNN/RNN-based models. Visualizations on attention weights also show how our model adaptively handles datasets with various density, and uncovers meaningful patterns in activity sequences.) <|cite_end|>. However, they have suffered from inefficiency due to the quadratic computational complexity <|cite_start|> (Reference: On The Computational Complexity of Self-Attention: Transformer architectures have led to remarkable progress in many state-of-art applications. 
However, despite their successes, modern transformers rely on the self-attention mechanism, whose time- and space-complexity is quadratic in the length of the input. Several approaches have been proposed to speed up self-attention mechanisms to achieve sub-quadratic running time; however, the large majority of these works are not accompanied by rigorous error guarantees. In this work, we establish lower bounds on the computational complexity of self-attention in a number of scenarios. We prove that the time complexity of self-attention is necessarily quadratic in the input length, unless the Strong Exponential Time Hypothesis (SETH) is false. This argument holds even if the attention computation is performed only approximately, and for a variety of attention mechanisms. As a complement to our lower bounds, we show that it is indeed possible to approximate dot-product self-attention using finite Taylor series in linear-time, at the cost of having an exponential dependence on the polynomial order.) <|cite_end|>. Therefore, to address the trade-off between effectiveness and efficiency, we propose SIGMA, a novel framework that achieves remarkable performance.
\subsection{Selective State Space Model}
Currently, SSM-based models have been proven effective in time-series prediction due to their ability to capture the hidden dynamics <|cite_start|> (Reference: State-space models: Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the...) <|cite_end|>. To further address the issues of catastrophic forgetting and long-term dependency in sequential processing, a special SSM called Mamba was introduced. Originating from the structured state space model (S4) <|cite_start|> (Reference: Simplified State Space Layers for Sequence Modeling: Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task.) <|cite_end|> <|cite_start|> (Reference: State-space models: Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the...) <|cite_end|>, Mamba has been proven to provide Transformer-level performance with better efficiency, particularly for long sequences <|cite_start|> (Reference: Uncovering Selective State Space Model's Capabilities in Lifelong Sequential Recommendation: Sequential Recommenders have been widely applied in various online services, aiming to model users' dynamic interests from their sequential interactions. With users increasingly engaging with online platforms, vast amounts of lifelong user behavioral sequences have been generated. However, existing sequential recommender models often struggle to handle such lifelong sequences. The primary challenges stem from computational complexity and the ability to capture long-range dependencies within the sequence. Recently, a state space model featuring a selective mechanism (i.e., Mamba) has emerged. In this work, we investigate the performance of Mamba for lifelong sequential recommendation (i.e., length>=2k). More specifically, we leverage the Mamba block to model lifelong user sequences selectively. We conduct extensive experiments to evaluate the performance of representative sequential recommendation models in the setting of lifelong sequences. Experiments on two real-world datasets demonstrate the superiority of Mamba. We found that RecMamba achieves performance comparable to the representative model while significantly reducing training duration by approximately 70% and memory costs by 80%. Codes and data are available at \url{https://github.com/nancheng58/RecMamba}.) <|cite_end|>. 
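For reference, the recurrence that S4- and Mamba-style layers implement is, in its standard discretized form (the notation here is generic and may differ in detail from the cited papers),
\begin{align}
h_t &= \bar{A}\,h_{t-1} + \bar{B}\,x_t, \\
y_t &= C\,h_t,
\end{align}
where $h_t$ is the hidden state, $x_t$ the input at step $t$, and $\bar{A}, \bar{B}$ are discretized versions of the continuous parameters $A, B$. Mamba's selective mechanism additionally makes $\bar{B}$, $C$, and the discretization step functions of the input $x_t$.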
However, Mamba still faces challenges when adopted for recommendation, \ie context modeling and short sequence modeling, which mainly stem from its original unidirectional structure and the inflexibility of its hidden state transitions. Correspondingly, we introduce a special bi-directional module called Partially Flipped Mamba and a Feature Extract GRU in our SIGMA framework, which mitigate these problems and offer a novel way to leverage Mamba in SRS. <|paper_end|>
"<|reference_start|> Sequential and Diverse Recommendation with Long Tail.: Sequential recommendation is a task that learns a temporal dynamic of a user behavior in sequential data and predicts items that a user would like afterward. However, diversity has been rarely emphasized in the context of sequential recommendation. Sequential and diverse recommendation must learn temporal preference on diverse items as well as on general items. Thus, we propose a sequential and diverse recommendation model that predicts a ranked list containing general items and also diverse items without compromising significant accuracy.To learn temporal preference on diverse items as well as on general items, we cluster and relocate consumed long tail items to make a pseudo ground truth for diverse items and learn the preference on long tail using recurrent neural network, which enables us to directly learn a ranking function. Extensive online and offline experiments deployed on a commercial platform demonstrate that our models significantly increase diversity while preserving accuracy compared to the state-of-the-art sequential recommendation model, and consequently our models improve user satisfaction. <|reference_end|>",
"<|reference_start|> E4SRec: An Elegant Effective Efficient Extensible Solution of Large Language Models for Sequential Recommendation: The recent advancements in Large Language Models (LLMs) have sparked interest in harnessing their potential within recommender systems. Since LLMs are designed for natural language tasks, existing recommendation approaches have predominantly transformed recommendation tasks into open-domain natural language generation tasks. However, this approach necessitates items to possess rich semantic information, often generates out-of-range results, and suffers from notably low efficiency and limited extensibility. Furthermore, practical ID-based recommendation strategies, reliant on a huge number of unique identities (IDs) to represent users and items, have gained prominence in real-world recommender systems due to their effectiveness and efficiency. Nevertheless, the incapacity of LLMs to model IDs presents a formidable challenge when seeking to leverage LLMs for personalized recommendations. In this paper, we introduce an Elegant Effective Efficient Extensible solution for large language models for Sequential Recommendation (E4SRec), which seamlessly integrates LLMs with traditional recommender systems that exclusively utilize IDs to represent items. Specifically, E4SRec takes ID sequences as inputs, ensuring that the generated outputs fall within the candidate lists. Furthermore, E4SRec possesses the capability to generate the entire ranking list in a single forward process, and demands only a minimal set of pluggable parameters, which are trained for each dataset while keeping the entire LLM frozen. We substantiate the effectiveness, efficiency, and extensibility of our proposed E4SRec through comprehensive experiments conducted on four widely-used real-world datasets. The implementation code is accessible at https://github.com/HestiaSky/E4SRec/. <|reference_end|>",
"<|reference_start|> Simplified State Space Layers for Sequence Modeling: Models using structured state space sequence (S4) layers have achieved state-of-the-art performance on long-range sequence modeling tasks. An S4 layer combines linear state space models (SSMs), the HiPPO framework, and deep learning to achieve high performance. We build on the design of the S4 layer and introduce a new state space layer, the S5 layer. Whereas an S4 layer uses many independent single-input, single-output SSMs, the S5 layer uses one multi-input, multi-output SSM. We establish a connection between S5 and S4, and use this to develop the initialization and parameterization used by the S5 model. The result is a state space layer that can leverage efficient and widely implemented parallel scans, allowing S5 to match the computational efficiency of S4, while also achieving state-of-the-art performance on several long-range sequence modeling tasks. S5 averages 87.4% on the long range arena benchmark, and 98.5% on the most difficult Path-X task. <|reference_end|>",
"<|reference_start|> State-space models: Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the... <|reference_end|>"
] | [
18,
31,
40,
41
] | {"<|cite_1|>": "ss-953223", "<|cite_2|>": "ss-2282379", "<|cite_3|>": "arxiv-202892", "<|cite_4|>": "ss-792359", "<|multi_cite_5_1|>": "arxiv-126595", "<|multi_cite_5_2|>": "arxiv-445432", "<|cite_6|>": "arxiv-445432", "<|cite_7|>": "ss-1291865", "<|cite_8|>": "arxiv-415255", "<|cite_9|>": "ss-2451530", "<|cite_10|>": "arxiv-563970", "<|multi_cite_11_1|>": "arxiv-592710", "<|multi_cite_11_2|>": "ss-1187121", "<|multi_cite_12_1|>": "arxiv-563970", "<|multi_cite_12_2|>": "arxiv-599479", "<|multi_cite_13_1|>": "arxiv-600479", "<|multi_cite_13_2|>": "arxiv-345977", "<|multi_cite_14_1|>": "ss-1225235", "<|multi_cite_14_2|>": "ss-1225235", "<|multi_cite_15_1|>": "arxiv-563970", "<|multi_cite_15_2|>": "arxiv-439292", "<|cite_16|>": "arxiv-592710", "<|cite_17|>": "arxiv-170660", "<|multi_cite_18_1|>": "arxiv-592710", "<|multi_cite_18_2|>": "arxiv-600479", "<|cite_19|>": "arxiv-87783", "<|multi_cite_20_1|>": "arxiv-541820", "<|multi_cite_20_2|>": "ss-1173303", "<|multi_cite_20_3|>": "arxiv-622294", "<|multi_cite_20_4|>": "arxiv-415255", "<|multi_cite_20_5|>": "ss-1845954", "<|multi_cite_20_6|>": "arxiv-564807", "<|multi_cite_20_7|>": "ss-1845953", "<|multi_cite_20_8|>": "arxiv-503121", "<|multi_cite_20_9|>": "arxiv-616257", "<|multi_cite_21_1|>": "ss-792359", "<|multi_cite_21_2|>": "ss-1225235", "<|cite_22|>": "arxiv-170660", "<|cite_23|>": "arxiv-445432", "<|cite_24|>": "ss-1187121", "<|multi_cite_25_1|>": "arxiv-439292", "<|multi_cite_25_2|>": "ss-1187121", "<|cite_26|>": "arxiv-599479"} |
2106.00840 | <|paper_start|> Title: Comparing Test Sets with Item Response Theory
Abstract: Comparing Test Sets with Item Response Theory: Recent years have seen numerous NLP datasets introduced to evaluate the performance of fine-tuned models on natural language understanding tasks. Recent results from large pretrained models, though, show that many of these datasets are largely saturated and unlikely to be able to detect further progress. What kind of datasets are still effective at discriminating among strong models, and what kind of datasets should we expect to be able to detect future improvements? To measure this uniformly across datasets, we draw on Item Response Theory and evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples. We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models, while SNLI, MNLI, and CommitmentBank seem to be saturated for current strong models. We also observe that the span selection task format, which is used for QA datasets like QAMR or SQuAD2.0, is effective in differentiating between strong and weak models.
Introduction
\label{sec:intro}
\begin{figure*}[t]
\centering
\includegraphics[width=0.95\linewidth]{img/irt-score_box.png}
\caption{Distribution of test examples according to our proposed \textit{locally estimated headroom} (LEH) scores (\textsection~\ref{sec:item-params}), which measure the \textbf{local slope} of the Item Characteristic Curve (ICC) for an example at the ability level corresponding to the \textbf{best model}, and thus reflect the effectiveness of that single example at distinguishing between near-state-of-the-art models. Datasets are grouped by task format: classification (green), sentence-level multiple-choice (blue), paragraph-level multiple-choice (red), and span selection (grey). Within each format, the datasets are sorted by their release date. More details on the datasets are given in Table \ref{tab:tasks}.}
\label{fig:irt-score-box}
\end{figure*}
Many datasets have been created to evaluate various aspects of natural language understanding (NLU) in English.
These datasets are useful to measure progress; however, it is evident from various leaderboards <|cite_start|> (Reference: {{GLUE: ) <|cite_end|> <|cite_start|> (Reference: SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems: In the last year, new models and methods for pretraining and transfer learning have driven striking performance improvements across a range of language understanding tasks.
The GLUE benchmark, introduced a little over one year ago, offers a single-number metric that summarizes progress on a diverse set of such tasks, but performance on the benchmark has recently surpassed the level of non-expert humans, suggesting limited headroom for further research. In this paper we present SuperGLUE, a new benchmark styled after GLUE with a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard. SuperGLUE is available at super.gluebenchmark.com.) <|cite_end|> <|cite_start|> (Reference: {S{Q: ) <|cite_end|> <|cite_start|> (Reference: {{SWAG: ) <|cite_end|> that many of them are no longer challenging or discriminative enough to differentiate strong models such as those based on Transformers <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>.\footnote{For example, the recent DeBERTa model <|cite_start|> (Reference: DeBERTa: Decoding-enhanced BERT with Disentangled Attention: Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques.
The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models' generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural langauge generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transform layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, out performing the human baseline by a decent margin (90.3 versus 89.8).) <|cite_end|> achieves parity with human annotators on the SuperGLUE benchmark score: \url{https://super.gluebenchmark.com/leaderboard}.}
Even if these benchmarks are sound tests of important (and potentially unsolved) tasks, their usefulness is limited if they cannot measure further progress. In this paper, we ask: Which datasets are best at distinguishing current and possible future strong models?
We aim to compare datasets using a single metric that accounts for their effectiveness in separating current stronger and weaker models. To that end, we use Item Response Theory \citep[IRT;][]{Baker1993ItemRT}, a statistical framework from psychometrics that is widely used for the evaluation of test items in educational assessment. IRT assumes that the probability that a model will correctly handle an example in a test set depends on the model's latent ability parameter and three example-specific parameters, typically measuring example difficulty (how strong a model has to be to get it right), discrimination (how effective the example is for differentiating between similar models), and guessing (how likely a weak model is to get the example right for spurious reasons).
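Concretely, under the standard three-parameter logistic (3PL) formulation commonly used in IRT (given here for reference; the exact parameterization fit in this work may differ in detail), the probability that a model $j$ with ability $\theta_j$ answers example $i$ correctly is
\begin{equation}
p_{ij} = c_i + \frac{1 - c_i}{1 + e^{-a_i(\theta_j - b_i)}},
\end{equation}
where $b_i$ is the difficulty, $a_i$ the discrimination, and $c_i$ the guessing parameter of example $i$.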
This paper presents a large-scale IRT analysis of existing English NLU datasets. Unlike previous work which focuses on example-level analysis \textit{within} individual datasets <|cite_start|> (Reference: Building an Evaluation Scale using Item Response Theory: Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.) <|cite_end|> <|cite_start|> (Reference: Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study: Interpreting the performance of deep learning models beyond test set accuracy is challenging. Characteristics of individual data points are often not considered during evaluation, and each data point is treated equally. We examine the impact of a test set question's difficulty to determine if there is a relationship between difficulty and performance. We model difficulty using well-studied psychometric methods on human response patterns. Experiments on Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the likelihood of answering a question correctly is impacted by the question's difficulty. As DNNs are trained with more data, easy examples are learned more quickly than hard examples.) <|cite_end|>, here we analyze example characteristics from a larger perspective by comparing individual examples \textit{across} datasets. We evaluate test sets from 29 datasets in different formats---classification, multiple-choice QA, and span-selection QA.
As responses, we use model predictions from 18 Transformer-based models, including some limited-capacity models chosen to better expose each dataset's ability to discriminate weaker from stronger predictors.
We then fit a single IRT model on these responses using a variational inference method.\footnote{
Our data and code can be found at \url{https://github.com/nyu-mll/nlu-test-sets}.}
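To illustrate how a local-slope score such as LEH (Figure~\ref{fig:irt-score-box}) can be read off from fitted 3PL parameters, the sketch below evaluates the derivative of an example's item characteristic curve at the ability of the best model. This is our own minimal illustration under the 3PL form given above; the exact estimator behind the reported scores may differ, and all parameter values shown are hypothetical.
\begin{verbatim}
import numpy as np

def icc_3pl(theta, a, b, c):
    """3PL item characteristic curve: P(correct | model ability theta)."""
    return c + (1.0 - c) / (1.0 + np.exp(-a * (theta - b)))

def leh_score(theta_best, a, b, c):
    """Local slope of the ICC at the best model's ability (illustrative LEH)."""
    s = 1.0 / (1.0 + np.exp(-a * (theta_best - b)))  # logistic part of the curve
    return (1.0 - c) * a * s * (1.0 - s)             # derivative of icc_3pl w.r.t. theta

# Hypothetical fitted parameters for two examples, with theta_best = 2.0:
# a well-targeted, discriminative item vs. an easy, already-saturated one.
print(leh_score(2.0, a=1.5, b=1.8, c=0.25))   # relatively large slope
print(leh_score(2.0, a=1.5, b=-1.0, c=0.25))  # close to zero
\end{verbatim}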
\newpage
We find:
\begin{itemize}
\item Quoref, HellaSwag, and MC-TACO contain the highest number of examples that can differentiate between near-state-of-the-art models, making them very likely to be effective at tracking near-future progress on the skills that they actually test (Figure~\ref{fig:irt-score-box}).
\item SQuAD2.0, NewsQA, QuAIL, MC-TACO, and ARC-Challenge have the most difficult examples.
\item Span-based QA is an effective task format for discriminating between strong and weak models.
\item CosmosQA, MC-TACO, Winogrande, and ARC-Challenge consist mostly of hard examples, while for most datasets, the example difficulty levels are more widely distributed.
\end{itemize}
Related Work
Prior work on using IRT to evaluate NLP systems mostly relies on human responses. <|cite_start|> (Reference: Models of translation competitions: What do we want to learn from a translation competition and how do we learn it with confidence? We argue that a disproportionate focus on ranking competition participants has led to lots of different rankings, but little insight about which rankings we should trust. In response, we provide the first framework that allows an empirical comparison of different analyses of competition results. We then use this framework to compare several analytical models on data from the Workshop on Machine Translation (WMT).) <|cite_end|> use IRT to estimate the relative ability of a set of machine translation systems using responses from pairwise comparison of system outputs by human judges. <|cite_start|> (Reference: {IRT: 이 연구에서는 문항반응이론을 적용하여 수직척도를 개발할 때 자료수집방법과 추정방법에 따라 산출되는 수직척도에서의 성장패턴과 척도변동성을 탐색하였다. 척도검사설계 방법과 공통문항설계방법을 반영한 자료를 모의실험을 통해 생성하였으며 각 자료수집방법별 동시추정방법과 분리추정방법을 적용하여 수직척도를 개발하고 각 방법에서 성장패턴과 척도변동성 추정치의 결과를...) <|cite_end|> extend this work by including a baseline translation to the pairwise comparison. <|cite_start|> (Reference: Building an Evaluation Scale using Item Response Theory: Evaluation of NLP methods requires testing against a previously vetted gold-standard test set and reporting standard metrics (accuracy/precision/recall/F1). The current assumption is that all items in a given test set are equal with regards to difficulty and discriminating power. We propose Item Response Theory (IRT) from psychometrics as an alternative means for gold-standard test-set generation and NLP system evaluation. IRT is able to describe characteristics of individual items - their difficulty and discriminating power - and can account for these characteristics in its estimation of human intelligence or ability for an NLP task. In this paper, we demonstrate IRT by generating a gold-standard test set for Recognizing Textual Entailment. By collecting a large number of human responses and fitting our IRT model, we show that our IRT model compares NLP systems with the performance in a human population and is able to provide more insight into system performance than standard evaluation metrics. We show that a high accuracy score does not always imply a high IRT score, which depends on the item characteristics and the response pattern.) <|cite_end|> <|cite_start|> (Reference: Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study: Interpreting the performance of deep learning models beyond test set accuracy is challenging. Characteristics of individual data points are often not considered during evaluation, and each data point is treated equally. We examine the impact of a test set question's difficulty to determine if there is a relationship between difficulty and performance. We model difficulty using well-studied psychometric methods on human response patterns. Experiments on Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the likelihood of answering a question correctly is impacted by the question's difficulty. As DNNs are trained with more data, easy examples are learned more quickly than hard examples.) <|cite_end|> use IRT to identify hard examples in natural language inference data based on human responses. 
In a follow-up study, <|cite_start|> (Reference: Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds: Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs.) <|cite_end|> compare human versus model responses and find that both are positively correlated and demonstrate the use cases of IRT parameters in training set filtering. <|cite_start|> (Reference: Item response theory for efficient human evaluation of chatbots: Conversational agent quality is currently assessed using human evaluation, and often requires an exorbitant number of comparisons to achieve statistical significance. In this paper, we introduce Item Response Theory (IRT) for chatbot evaluation, using a paired comparison in which annotators judge which system responds better to the next turn of a conversation. IRT is widely used in educational testing for simultaneously assessing the ability of test takers and the quality of test questions. It is similarly well suited for chatbot evaluation since it allows the assessment of both models and the prompts used to evaluate them. We use IRT to efficiently assess chatbots, and show that different examples from the evaluation set are better suited for comparing high-quality (nearer to human performance) than low-quality systems. Finally, we use IRT to reduce the number of evaluation examples assessed by human annotators while retaining discriminative power.) <|cite_end|> use IRT to evaluate chatbot systems.
The work by <|cite_start|> (Reference: Item response theory in AI: Analysing machine learning classifiers at the instance level: ) <|cite_end|> is the first to study the idea of using model responses (as opposed to human responses) for IRT in machine learning research.
For NLU, <|cite_start|> (Reference: Dynamic Data Selection for Curriculum Learning via Ability Estimation: Curriculum learning methods typically rely on heuristics to estimate the difficulty of training examples or the ability of the model. In this work, we propose replacing difficulty heuristics with learned difficulty parameters. We also propose Dynamic Data selection for Curriculum Learning via Ability Estimation (DDaCLAE), a strategy that probes model ability at each training epoch to select the best training examples at that point. We show that models using learned difficulty and/or ability outperform heuristic-based curriculum learning models on the GLUE classification tasks.) <|cite_end|> use model responses to estimate difficulty parameters of several GLUE datasets for dynamic data selection in curriculum learning. In concurrent work, <|cite_start|> (Reference: Evaluation Examples are not Equally Informative: How should that change NLP Leaderboards?: Accessible Abstract: When can we call an AI ”intelligent”? Just like humans, a common approach is to ask them a bunch of questions. These questions posed to modern machine learning methods are collected in metrics called leaderboards to monitor progress) <|cite_end|> study how IRT can be used for more nuanced leaderboard evaluations. Their experiments demonstrate that IRT can produce a more reliable ranking of models than the traditional metrics. They also show that IRT is not only useful for better understanding of individual examples in the dataset and task, but also effective in identifying annotation errors.
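For concreteness, the studies above all build on logistic item response models; a common two-parameter (2PL) instantiation, written here in our own notation (the cited works differ in the exact variant they adopt), models the probability that subject $j$ (a human annotator or a model) answers item $i$ correctly as
\begin{equation}
% 2PL form shown for illustration only; the symbols $\theta_j$, $a_i$, $b_i$ are notation introduced here.
p(y_{ij} = 1 \mid \theta_j, a_i, b_i) = \frac{1}{1 + e^{-a_i(\theta_j - b_i)}},
\end{equation}
where $\theta_j$ denotes the latent ability of the subject, $b_i$ the difficulty of item $i$, and $a_i$ its discrimination; the item and ability parameters are jointly estimated from a matrix of response patterns, e.g., by (variational) maximum likelihood.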
For other dataset evaluations, in addition to providing a benchmark, the SuperGLUE paper also compares a set of candidate datasets using a fixed pool of machine learning models and human annotators <|cite_start|> (Reference: Human vs. Muppet: A Conservative Estimate of Human Performance on the GLUE Benchmark: The GLUE benchmark (Wang et al., 2019b) is a suite of language understanding tasks which has seen dramatic progress in the past year, with average performance moving from 70.0 at launch to 83.9, state of the art at the time of writing (May 24, 2019). Here, we measure human performance on the benchmark, in order to learn whether significant headroom remains for further progress. We provide a conservative estimate of human performance on the benchmark through crowdsourcing: Our annotators are non-experts who must learn each task from a brief set of instructions and 20 examples. In spite of limited training, these annotators robustly outperform the state of the art on six of the nine GLUE tasks and achieve an average score of 87.1. Given the fast pace of progress however, the headroom we observe is quite limited. To reproduce the data-poor setting that our annotators must learn in, we also train the BERT model (Devlin et al., 2019) in limited-data regimes, and conclude that low-resource sentence classification remains a challenge for modern neural network approaches to text understanding.) <|cite_end|>. <|cite_start|> (Reference: Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling: Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-scale systematic study of candidate pretraining tasks, comparing 19 different tasks both as alternatives and complements to language modeling. Our primary results support the use language modeling, especially when combined with pretraining on additional labeled-data tasks. However, our results are mixed across pretraining tasks and show some concerning trends: In ELMo's pretrain-then-freeze paradigm, random baselines are worryingly strong and results vary strikingly across target tasks. In addition, fine-tuning BERT on an intermediate task often negatively impacts downstream transfer. In a more positive trend, we see modest gains from multitask training, suggesting the development of more sophisticated multitask and transfer learning techniques as an avenue for further research.) <|cite_end|> investigate pretraining tasks and paradigms for effective transfer learning methods. <|cite_start|> (Reference: Intermediate-task transfer learning with pretrained language models: When and why does it work?: While pretrained models such as BERT have shown large gains across natural language understanding tasks, their performance can be improved by further training the model on a data-rich intermediate task, before fine-tuning it on a target task. However, it is still poorly understood when and why intermediate-task training is beneficial for a given target task. To investigate this, we perform a large-scale study on the pretrained RoBERTa model with 110 intermediate-target task combinations. We further evaluate all trained models with 25 probing tasks meant to reveal the specific skills that drive transfer. We observe that intermediate tasks requiring high-level inference and reasoning abilities tend to work best. 
We also observe that target task performance is strongly correlated with higher-level abilities such as coreference resolution. However, we fail to observe more granular correlations between probing and target task performance, highlighting the need for further work on broad-coverage probing benchmarks. We also observe evidence that the forgetting of knowledge learned during pretraining may limit our analysis, highlighting the need for further work on transfer learning methods in these settings.) <|cite_end|> study when and why intermediate-task training is useful for a given target task. <|cite_start|> (Reference: Exploring and Predicting Transferability across NLP Tasks: Recent advances in NLP demonstrate the effectiveness of training large-scale language models and transferring them to downstream tasks. Can fine-tuning these models on tasks other than language modeling further improve performance? In this paper, we conduct an extensive study of the transferability between 33 NLP tasks across three broad classes of problems (text classification, question answering, and sequence labeling). Our results show that transfer learning is more beneficial than previously thought, especially when target task data is scarce, and can improve performance even when the source task is small or differs substantially from the target task (e.g., part-of-speech tagging transfers well to the DROP QA dataset). We also develop task embeddings that can be used to predict the most transferable source tasks for a given target task, and we validate their effectiveness in experiments controlled for source and target data size. Overall, our experiments reveal that factors such as source data size, task and domain similarity, and task complexity all play a role in determining transferability.) <|cite_end|> introduce task embeddings to predict the most beneficial source task for a given target task. <|cite_start|> (Reference: A Framework for Evaluation of Machine Reading Comprehension Gold Standards: Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text. While neural MRC systems gain popularity and achieve noticeable performance, issues are being raised with the methodology used to establish their performance, particularly concerning the data design of gold standards that are used to evaluate them. There is but a limited understanding of the challenges present in this data, which makes it hard to draw comparisons and formulate reliable hypotheses. As a first step towards alleviating the problem, this paper proposes a unifying framework to systematically investigate the present linguistic features, required reasoning and background knowledge and factual correctness on one hand, and the presence of lexical cues as a lower bound for the requirement of understanding on the other hand. We propose a qualitative annotation schema for the first and a set of approximative metrics for the latter. In a first application of the framework, we analyse modern MRC gold standards and present our findings: the absence of features that contribute towards lexical ambiguity, the varying factual correctness of the expected answers and the presence of lexical cues, all of which potentially lower the reading comprehension complexity and quality of the evaluation data.) <|cite_end|> propose an evaluation framework for machine reading comprehension (MRC) datasets and reveal some concerns regarding factual correctness and the presence of linguistic cues in existing MRC gold datasets. <|paper_end|> | [
"<|reference_start|> {{SWAG: The purpose of this study is to delve into the identity of the swag style which has diversified into various forms by exploring the phenomenon, formative characteristics and the internal values of the swag style in modern fashioin. This study discusses the concept and the socio-cultural meanings of swag from the perspective of Jean Baudrillard\"s hyper-reality, and a form of existence. The classifies the swag fashion styles into parody, hip hop and collage-type mix-and-match. Expressive characteristics of the swag look in modern fashion are as follows. First, the swag look utilizes the parody technique. In the mid-2000s, the look parodied brand logos as a form of self-mocking and active self-derision toward cheap imitations. Second, the swag look borrows from the expressive factors of the hip-hop style. Born as a sub-culture based on music, hip-hop has become a way of life, as its nature became multi-cultural and trans-cultural while its fashion style gained popularity globally after the 1980s. Third, the swag look barrows from the pop-type collage form as it mixes-and-matches costume items based on the expressive characteristics of hip hop, and this can be seen through items being used in new, non-formative and free styles. Comic aesthetics is revealed in parodied expression, hip-hop factors and collage-style mix-and-match. Swag as a hyper-reality manifests itself in various natures: humorous nature, negative nature and deconstructive nature through reflection and re-enactment of reality, transmutation and distortion of reality, and absence of reality respectively. However, it does not have a binding nature, which is the norm for subcultures. This characteristic, in combination with it having internal lightness, strong meaning of communication, and a sharing of self-contentment, distinguishes itself from the general meanings of existing parody fashion, hip-hop fashion and collage fashion. <|reference_end|>",
"<|reference_start|> Understanding Deep Learning Performance through an Examination of Test Set Difficulty: A Psychometric Case Study: Interpreting the performance of deep learning models beyond test set accuracy is challenging. Characteristics of individual data points are often not considered during evaluation, and each data point is treated equally. We examine the impact of a test set question's difficulty to determine if there is a relationship between difficulty and performance. We model difficulty using well-studied psychometric methods on human response patterns. Experiments on Natural Language Inference (NLI) and Sentiment Analysis (SA) show that the likelihood of answering a question correctly is impacted by the question's difficulty. As DNNs are trained with more data, easy examples are learned more quickly than hard examples. <|reference_end|>",
"<|reference_start|> Learning Latent Parameters without Human Response Patterns: Item Response Theory with Artificial Crowds: Incorporating Item Response Theory (IRT) into NLP tasks can provide valuable information about model performance and behavior. Traditionally, IRT models are learned using human response pattern (RP) data, presenting a significant bottleneck for large data sets like those required for training deep neural networks (DNNs). In this work we propose learning IRT models using RPs generated from artificial crowds of DNN models. We demonstrate the effectiveness of learning IRT models using DNN-generated data through quantitative and qualitative analyses for two NLP tasks. Parameters learned from human and machine RPs for natural language inference and sentiment analysis exhibit medium to large positive correlations. We demonstrate a use-case for latent difficulty item parameters, namely training set filtering, and show that using difficulty to sample training data outperforms baseline methods. Finally, we highlight cases where human expectation about item difficulty does not match difficulty as estimated from the machine RPs. <|reference_end|>",
"<|reference_start|> Can You Tell Me How to Get Past Sesame Street? Sentence-Level Pretraining Beyond Language Modeling: Natural language understanding has recently seen a surge of progress with the use of sentence encoders like ELMo (Peters et al., 2018a) and BERT (Devlin et al., 2019) which are pretrained on variants of language modeling. We conduct the first large-scale systematic study of candidate pretraining tasks, comparing 19 different tasks both as alternatives and complements to language modeling. Our primary results support the use language modeling, especially when combined with pretraining on additional labeled-data tasks. However, our results are mixed across pretraining tasks and show some concerning trends: In ELMo's pretrain-then-freeze paradigm, random baselines are worryingly strong and results vary strikingly across target tasks. In addition, fine-tuning BERT on an intermediate task often negatively impacts downstream transfer. In a more positive trend, we see modest gains from multitask training, suggesting the development of more sophisticated multitask and transfer learning techniques as an avenue for further research. <|reference_end|>"
] | [
3,
11,
12,
18
] | {"<|multi_cite_1_1|>": "ss-1512504", "<|multi_cite_1_2|>": "arxiv-202382", "<|multi_cite_1_3|>": "ss-946675", "<|multi_cite_1_4|>": "ss-1518595", "<|cite_2|>": "ss-832115", "<|cite_3|>": "arxiv-269801", "<|multi_cite_4_1|>": "arxiv-98867", "<|multi_cite_4_2|>": "arxiv-116667", "<|cite_6|>": "ss-726744", "<|cite_7|>": "ss-1837077", "<|multi_cite_8_1|>": "arxiv-98867", "<|multi_cite_8_2|>": "arxiv-116667", "<|cite_9|>": "arxiv-221027", "<|cite_10|>": "ss-1471265", "<|cite_11|>": "ss-1616690", "<|cite_12|>": "arxiv-300467", "<|cite_13|>": "ss-1485574", "<|cite_5|>": "ss-918397", "<|cite_14|>": "arxiv-185889", "<|cite_15|>": "ss-685967", "<|cite_16|>": "ss-1527112", "<|cite_17|>": "arxiv-252945"} |
2301.04604-1 | <|cite_start|> (Reference: Low-Rank Subspaces in GANs: The latent space of a Generative Adversarial Network (GAN) has been shown to encode rich semantics within some subspaces. To identify these subspaces, researchers typically analyze the statistical information from a collection of synthesized data, and the identified subspaces tend to control image attributes globally (i.e., manipulating an attribute causes the change of an entire image). By contrast, this work introduces low-rank subspaces that enable more precise control of GAN generation. Concretely, given an arbitrary image and a region of interest (e.g., eyes of face images), we manage to relate the latent space to the image region with the Jacobian matrix and then use low-rank factorization to discover steerable latent subspaces. There are three distinguishable strengths of our approach that can be aptly called LowRankGAN. First, compared to analytic algorithms in prior work, our low-rank factorization of Jacobians is able to find the low-dimensional representation of attribute manifold, making image editing more precise and controllable. Second, low-rank factorization naturally yields a null space of attributes such that moving the latent code within it only affects the outer region of interest. Therefore, local image editing can be simply achieved by projecting an attribute vector into the null space without relying on a spatial mask as existing methods do. Third, our method can robustly work with a local region from one image for analysis yet well generalize to other images, making it much easy to use in practice. Extensive experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.) <|cite_end|> <|cite_start|> (Reference: Region-Based Semantic Factorization in GANs: Despite the rapid advancement of semantic discovery in the latent space of Generative Adversarial Networks (GANs), existing approaches either are limited to finding global attributes or rely on a number of segmentation masks to identify local attributes. In this work, we present a highly efficient algorithm to factorize the latent semantics learned by GANs concerning an arbitrary image region. Concretely, we revisit the task of local manipulation with pre-trained GANs and formulate region-based semantic discovery as a dual optimization problem. Through an appropriately defined generalized Rayleigh quotient, we manage to solve such a problem without any annotations or training. Experimental results on various state-of-the-art GAN models demonstrate the effectiveness of our approach, as well as its superiority over prior arts regarding precise control, region robustness, speed of implementation, and simplicity of use.) <|cite_end|>.
However, all of these works focus on interpreting pre-trained GANs, and few of them apply such findings during GAN training.
\noindent\textbf{Regularizers for GAN training.}
Many attempts have been made to regularize GANs during training <|cite_start|> (Reference: Improved Training of Wasserstein GANs: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.) <|cite_end|> <|cite_start|> (Reference: Which Training Methods for GANs do actually Converge?: Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distribution lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.) <|cite_end|> <|cite_start|> (Reference: Analyzing and Improving the Image Quality of StyleGAN: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.) 
<|cite_end|> <|cite_start|> (Reference: {{GAN: مركبات الجاليوم نترايد ( GaN ) هى إحدى فصائل أشباه الموصلات تعتبر من أهم الوصلات الالكترونية وذلك لتميزها التطبيقي وخاصة فى المجالات الكهروضوئية ( الليزر ) والمجالات التطبيقية الالكترونية الاخرى ( High power Devices ) بسبب سعة فجوة الطاقة حيث أن هذه السعة تتراوح ما بين 1.9 ev حتى 6.28 ونتيجة لذلك نستطيع الحصول على مصادر متعددة للطاقة ومثال على ذلك ثنائيات اﻹنبعاث الضوئي (LEDs ) ومصادر ثنائية أ خرى ( LDs ) ونظرا لأهمية هذا النوع من التقنية حاليا فان هناك توجها عالميا كبيرآ جدآ على مستوى مراكز الأبحاث سواء كان ذلك فى الشركات المتخصصة أو مراكزالأبحاث فى الجامعات والمعاهد لدراسة خصائص هذه المواد من النواحي الفيزيائية والتطبيقية سواء عن طريق التصنيع أو القياسات الكهربية والضوئية والمغنا طيسية وبناء على ماسبق فقد قمنا بدراسات عدة للدخول فى هذا المجال من أجل دراسة خصائص هذه المواد الكهربية وتم الحصول على العلاقة مابين التيار والجهد ودرجة الحرارة وكذلك دراسة السعة الكهربية لها وقد توصلنا الى نتائج جيدة جدا لهذه الوصلات الكهربية حيث اننا استخدمنا معدن الذهب وكذلك معدن الألمنيوم من أجل التوصيلات الكهربية وتمكنا بعد ذلك من الحصول على نتائج مطابقة بشكل كبير لماهو موجود نظريا كما هو موضح فى الجزء الخاص بالنتائج العملية فى هذا البحث .) <|cite_end|> <|cite_start|> (Reference: Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation: Unsupervised disentanglement learning is a crucial issue for understanding and exploiting deep generative models. Recently, SeFa tries to find latent disentangled directions by performing SVD on the first projection of a pre-trained GAN. However, it is only applied to the first layer and works in a post-processing way. Hessian Penalty minimizes the off-diagonal entries of the output's Hessian matrix to facilitate disentanglement, and can be applied to multi-layers.However, it constrains each entry of output independently, making it not sufficient in disentangling the latent directions (e.g., shape, size, rotation, etc.) of spatially correlated variations. In this paper, we propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative model to learn disentangled representations. It simply encourages the variation of output caused by perturbations on different latent dimensions to be orthogonal, and the Jacobian with respect to the input is calculated to represent this variation. We show that our OroJaR also encourages the output's Hessian matrix to be diagonal in an indirect manner. In contrast to the Hessian Penalty, our OroJaR constrains the output in a holistic way, making it very effective in disentangling latent dimensions corresponding to spatially correlated variations. Quantitative and qualitative experimental results show that our method is effective in disentangled and controllable image generation, and performs favorably against the state-of-the-art methods. Our code is available at https://github.com/csyxwei/OroJaR) <|cite_end|> <|cite_start|> (Reference: The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement: Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. 
We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.) <|cite_end|> <|cite_start|> (Reference: Improving GANs with A Dynamic Discriminator: Discriminator plays a vital role in training generative adversarial networks (GANs) via distinguishing real and synthesized samples. While the real data distribution remains the same, the synthesis distribution keeps varying because of the evolving generator, and thus effects a corresponding change to the bi-classification task for the discriminator. We argue that a discriminator with an on-the-fly adjustment on its capacity can better accommodate such a time-varying task. A comprehensive empirical study confirms that the proposed training strategy, termed as DynamicD, improves the synthesis performance without incurring any additional computation cost or training objectives. Two capacity adjusting schemes are developed for training GANs under different data regimes: i) given a sufficient amount of training data, the discriminator benefits from a progressively increased learning capacity, and ii) when the training data is limited, gradually decreasing the layer width mitigates the over-fitting issue of the discriminator. Experiments on both 2D and 3D-aware image synthesis tasks conducted on a range of datasets substantiate the generalizability of our DynamicD as well as its substantial improvement over the baselines. Furthermore, DynamicD is synergistic to other discriminator-improving approaches (including data augmentation, regularizers, and pre-training), and brings continuous performance gain when combined for learning GANs.) <|cite_end|>.
Some of them try to improve the training stability of GANs by regularizing the gradients of the discriminator <|cite_start|> (Reference: Improved Training of Wasserstein GANs: Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes progress toward stable training of GANs, but sometimes can still generate only low-quality samples or fail to converge. We find that these problems are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to undesired behavior. We propose an alternative to clipping weights: penalize the norm of gradient of the critic with respect to its input. Our proposed method performs better than standard WGAN and enables stable training of a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data. We also achieve high quality generations on CIFAR-10 and LSUN bedrooms.) <|cite_end|> <|cite_start|> (Reference: Which Training Methods for GANs do actually Converge?: Recent work has shown local convergence of GAN training for absolutely continuous data and generator distributions. In this paper, we show that the requirement of absolute continuity is necessary: we describe a simple yet prototypical counterexample showing that in the more realistic case of distributions that are not absolutely continuous, unregularized GAN training is not always convergent. Furthermore, we discuss regularization strategies that were recently proposed to stabilize GAN training. Our analysis shows that GAN training with instance noise or zero-centered gradient penalties converges. On the other hand, we show that Wasserstein-GANs and WGAN-GP with a finite number of discriminator updates per generator update do not always converge to the equilibrium point. We discuss these results, leading us to a new explanation for the stability problems of GAN training. Based on our analysis, we extend our convergence results to more general GANs and prove local convergence for simplified gradient penalties even if the generator and data distribution lie on lower dimensional manifolds. We find these penalties to work well in practice and use them to learn high-resolution generative image models for a variety of datasets with little hyperparameter tuning.) <|cite_end|>, the spectral norm of each layer <|cite_start|> (Reference: Spectral Normalization for Generative Adversarial Networks: One of the challenges in the study of generative adversarial networks is the instability of its training. In this paper, we propose a novel weight normalization technique called spectral normalization to stabilize the training of the discriminator. Our new normalization technique is computationally light and easy to incorporate into existing implementations. We tested the efficacy of spectral normalization on CIFAR10, STL-10, and ILSVRC2012 dataset, and we experimentally confirmed that spectrally normalized GANs (SN-GANs) is capable of generating images of better or equal quality relative to the previous training stabilization techniques.) <|cite_end|>, or the singular values of the generator <|cite_start|> (Reference: Is Generator Conditioning Causally Related to GAN Performance?: Recent work (Pennington et al, 2017) suggests that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. 
Motivated by this, we study the distribution of singular values of the Jacobian of the generator in Generative Adversarial Networks (GANs). We find that this Jacobian generally becomes ill-conditioned at the beginning of training. Moreover, we find that the average (with z from p(z)) conditioning of the generator is highly predictive of two other ad-hoc metrics for measuring the 'quality' of trained GANs: the Inception Score and the Frechet Inception Distance (FID). We test the hypothesis that this relationship is causal by proposing a 'regularization' technique (called Jacobian Clamping) that softly penalizes the condition number of the generator Jacobian. Jacobian Clamping improves the mean Inception Score and the mean FID for GANs trained on several datasets. It also greatly reduces inter-run variance of the aforementioned scores, addressing (at least partially) one of the main criticisms of GANs.) <|cite_end|>.
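To make the first of these concrete, the gradient penalty of WGAN-GP and the zero-centered $R_1$ penalty can be sketched (in our notation) as
\begin{equation}
% Illustrative forms; $\lambda$, $\gamma$, and $\hat{x}$ are notation introduced here.
\mathcal{L}_{\mathrm{GP}} = \lambda\, \mathbb{E}_{\hat{x}}\!\left[\big(\|\nabla_{\hat{x}} D(\hat{x})\|_2 - 1\big)^2\right],
\qquad
\mathcal{L}_{R_1} = \frac{\gamma}{2}\, \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\|\nabla_{x} D(x)\|_2^2\right],
\end{equation}
where $\hat{x}$ is sampled along straight lines between real and generated samples; either term is added to the discriminator loss to stabilize training.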
Besides, some of them <|cite_start|> (Reference: Semantically Decomposing the Latent Spaces of Generative Adversarial Networks: We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm's ability to generate convincing, identity-matched photographs.) <|cite_end|> <|cite_start|> (Reference: The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement: Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.) <|cite_end|> <|cite_start|> (Reference: Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation: Unsupervised disentanglement learning is a crucial issue for understanding and exploiting deep generative models. Recently, SeFa tries to find latent disentangled directions by performing SVD on the first projection of a pre-trained GAN. However, it is only applied to the first layer and works in a post-processing way. Hessian Penalty minimizes the off-diagonal entries of the output's Hessian matrix to facilitate disentanglement, and can be applied to multi-layers.However, it constrains each entry of output independently, making it not sufficient in disentangling the latent directions (e.g., shape, size, rotation, etc.) of spatially correlated variations. In this paper, we propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative model to learn disentangled representations. It simply encourages the variation of output caused by perturbations on different latent dimensions to be orthogonal, and the Jacobian with respect to the input is calculated to represent this variation. 
We show that our OroJaR also encourages the output's Hessian matrix to be diagonal in an indirect manner. In contrast to the Hessian Penalty, our OroJaR constrains the output in a holistic way, making it very effective in disentangling latent dimensions corresponding to spatially correlated variations. Quantitative and qualitative experimental results show that our method is effective in disentangled and controllable image generation, and performs favorably against the state-of-the-art methods. Our code is available at https://github.com/csyxwei/OroJaR) <|cite_end|> <|cite_start|> (Reference: {{GAN: مركبات الجاليوم نترايد ( GaN ) هى إحدى فصائل أشباه الموصلات تعتبر من أهم الوصلات الالكترونية وذلك لتميزها التطبيقي وخاصة فى المجالات الكهروضوئية ( الليزر ) والمجالات التطبيقية الالكترونية الاخرى ( High power Devices ) بسبب سعة فجوة الطاقة حيث أن هذه السعة تتراوح ما بين 1.9 ev حتى 6.28 ونتيجة لذلك نستطيع الحصول على مصادر متعددة للطاقة ومثال على ذلك ثنائيات اﻹنبعاث الضوئي (LEDs ) ومصادر ثنائية أ خرى ( LDs ) ونظرا لأهمية هذا النوع من التقنية حاليا فان هناك توجها عالميا كبيرآ جدآ على مستوى مراكز الأبحاث سواء كان ذلك فى الشركات المتخصصة أو مراكزالأبحاث فى الجامعات والمعاهد لدراسة خصائص هذه المواد من النواحي الفيزيائية والتطبيقية سواء عن طريق التصنيع أو القياسات الكهربية والضوئية والمغنا طيسية وبناء على ماسبق فقد قمنا بدراسات عدة للدخول فى هذا المجال من أجل دراسة خصائص هذه المواد الكهربية وتم الحصول على العلاقة مابين التيار والجهد ودرجة الحرارة وكذلك دراسة السعة الكهربية لها وقد توصلنا الى نتائج جيدة جدا لهذه الوصلات الكهربية حيث اننا استخدمنا معدن الذهب وكذلك معدن الألمنيوم من أجل التوصيلات الكهربية وتمكنا بعد ذلك من الحصول على نتائج مطابقة بشكل كبير لماهو موجود نظريا كما هو موضح فى الجزء الخاص بالنتائج العملية فى هذا البحث .) <|cite_end|> <|cite_start|> (Reference: Analyzing and Improving the Image Quality of StyleGAN: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.) <|cite_end|>aim to improve the disentanglement property of GANs.
For example, <|cite_start|> (Reference: The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement: Existing disentanglement methods for deep generative models rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization term that encourages the Hessian of a generative model with respect to its input to be diagonal. We introduce a model-agnostic, unbiased stochastic approximation of this term based on Hutchinson's estimator to compute it efficiently during training. Our method can be applied to a wide range of deep generators with just a few lines of code. We show that training with the Hessian Penalty often causes axis-aligned disentanglement to emerge in latent space when applied to ProGAN on several datasets. Additionally, we use our regularization term to identify interpretable directions in BigGAN's latent space in an unsupervised fashion. Finally, we provide empirical evidence that the Hessian Penalty encourages substantial shrinkage when applied to over-parameterized latent spaces.) <|cite_end|> <|cite_start|> (Reference: Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation: Unsupervised disentanglement learning is a crucial issue for understanding and exploiting deep generative models. Recently, SeFa tries to find latent disentangled directions by performing SVD on the first projection of a pre-trained GAN. However, it is only applied to the first layer and works in a post-processing way. Hessian Penalty minimizes the off-diagonal entries of the output's Hessian matrix to facilitate disentanglement, and can be applied to multi-layers.However, it constrains each entry of output independently, making it not sufficient in disentangling the latent directions (e.g., shape, size, rotation, etc.) of spatially correlated variations. In this paper, we propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative model to learn disentangled representations. It simply encourages the variation of output caused by perturbations on different latent dimensions to be orthogonal, and the Jacobian with respect to the input is calculated to represent this variation. We show that our OroJaR also encourages the output's Hessian matrix to be diagonal in an indirect manner. In contrast to the Hessian Penalty, our OroJaR constrains the output in a holistic way, making it very effective in disentangling latent dimensions corresponding to spatially correlated variations. Quantitative and qualitative experimental results show that our method is effective in disentangled and controllable image generation, and performs favorably against the state-of-the-art methods. Our code is available at https://github.com/csyxwei/OroJaR) <|cite_end|> add regularizers (the Hessian Penalty and the Orthogonal Jacobian Regularization, respectively) that disentangle the latent space so that each latent dimension affects only a single attribute of the output image.
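Concretely, writing $G$ for the generator and $z$ for the latent code (a simplified sketch in our notation; the cited papers aggregate these terms over layers and output units), the two regularizers encourage
\begin{equation}
% Simplified forms; $J_i$ denotes the Jacobian column for latent dimension $i$.
\sum_{i \neq j} \Big\| \frac{\partial^2 G(z)}{\partial z_i\, \partial z_j} \Big\|_2^2 \;\rightarrow\; 0
\qquad \text{and} \qquad
\sum_{i \neq j} \big( J_i^{\top} J_j \big)^2 \;\rightarrow\; 0,
\quad J_i = \frac{\partial G(z)}{\partial z_i},
\end{equation}
i.e., a diagonal Hessian of the output with respect to the input for the Hessian Penalty, and mutually orthogonal Jacobian directions for OroJaR; both are approximated stochastically (e.g., via Hutchinson's estimator) during training.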
In StyleGAN2 <|cite_start|> (Reference: Analyzing and Improving the Image Quality of StyleGAN: The style-based GAN architecture (StyleGAN) yields state-of-the-art results in data-driven unconditional generative image modeling. We expose and analyze several of its characteristic artifacts, and propose changes in both model architecture and training methods to address them. In particular, we redesign the generator normalization, revisit progressive growing, and regularize the generator to encourage good conditioning in the mapping from latent codes to images. In addition to improving image quality, this path length regularizer yields the additional benefit that the generator becomes significantly easier to invert. This makes it possible to reliably attribute a generated image to a particular network. We furthermore visualize how well the generator utilizes its output resolution, and identify a capacity problem, motivating us to train larger models for additional quality improvements. Overall, our improved model redefines the state of the art in unconditional image modeling, both in terms of existing distribution quality metrics as well as perceived image quality.) <|cite_end|>, a path length regularization is added to encourage good conditioning in the mapping from latent codes to images, i.e., a smoother generator.
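In a simplified form, this regularizer penalizes
\begin{equation}
% Sketch in our notation; $a$ is a running average of $\|J_w^{\top} y\|_2$.
\mathbb{E}_{w, y}\big(\|J_w^{\top} y\|_2 - a\big)^2,
\qquad J_w = \frac{\partial G(w)}{\partial w},
\end{equation}
where $w$ is an intermediate latent code, $y$ is an image with normally distributed pixel values, and $a$ tracks the running average of $\|J_w^{\top} y\|_2$, so that a fixed-size step in the latent space leads to a change of fixed magnitude in the image.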
And <|cite_start|> (Reference: Semantically Decomposing the Latent Spaces of Generative Adversarial Networks: We propose a new algorithm for training generative adversarial networks that jointly learns latent codes for both identities (e.g. individual humans) and observations (e.g. specific photographs). By fixing the identity portion of the latent codes, we can generate diverse images of the same subject, and by fixing the observation portion, we can traverse the manifold of subjects while maintaining contingent aspects such as lighting and pose. Our algorithm features a pairwise training scheme in which each sample from the generator consists of two images with a common identity code. Corresponding samples from the real dataset consist of two distinct photographs of the same subject. In order to fool the discriminator, the generator must produce pairs that are photorealistic, distinct, and appear to depict the same individual. We augment both the DCGAN and BEGAN approaches with Siamese discriminators to facilitate pairwise training. Experiments with human judges and an off-the-shelf face verification system demonstrate our algorithm's ability to generate convincing, identity-matched photographs.) <|cite_end|> <|cite_start|> (Reference: {{GAN: مركبات الجاليوم نترايد ( GaN ) هى إحدى فصائل أشباه الموصلات تعتبر من أهم الوصلات الالكترونية وذلك لتميزها التطبيقي وخاصة فى المجالات الكهروضوئية ( الليزر ) والمجالات التطبيقية الالكترونية الاخرى ( High power Devices ) بسبب سعة فجوة الطاقة حيث أن هذه السعة تتراوح ما بين 1.9 ev حتى 6.28 ونتيجة لذلك نستطيع الحصول على مصادر متعددة للطاقة ومثال على ذلك ثنائيات اﻹنبعاث الضوئي (LEDs ) ومصادر ثنائية أ خرى ( LDs ) ونظرا لأهمية هذا النوع من التقنية حاليا فان هناك توجها عالميا كبيرآ جدآ على مستوى مراكز الأبحاث سواء كان ذلك فى الشركات المتخصصة أو مراكزالأبحاث فى الجامعات والمعاهد لدراسة خصائص هذه المواد من النواحي الفيزيائية والتطبيقية سواء عن طريق التصنيع أو القياسات الكهربية والضوئية والمغنا طيسية وبناء على ماسبق فقد قمنا بدراسات عدة للدخول فى هذا المجال من أجل دراسة خصائص هذه المواد الكهربية وتم الحصول على العلاقة مابين التيار والجهد ودرجة الحرارة وكذلك دراسة السعة الكهربية لها وقد توصلنا الى نتائج جيدة جدا لهذه الوصلات الكهربية حيث اننا استخدمنا معدن الذهب وكذلك معدن الألمنيوم من أجل التوصيلات الكهربية وتمكنا بعد ذلك من الحصول على نتائج مطابقة بشكل كبير لماهو موجود نظريا كما هو موضح فى الجزء الخاص بالنتائج العملية فى هذا البحث .) <|cite_end|>try to disentangle different semantics in the latent space by borrowing some labels.
However, none of them attempts to link an arbitrary image region to a specific set of latent axes.
\begin{figure}[t]
\centering
\includegraphics[width=1.0\linewidth]{figures/framework_linkgan.pdf}
\vspace{-15pt}
\caption{
\textbf{Concept diagram} of \method, where some axes of the latent space are \textit{explicitly} linked to the image pixels of a spatial area.
In this way, we can alter the image content within the linked region simply by resampling the latent code on these axes.
}
\label{fig:framework}
\vspace{-5pt}
\end{figure} <|paper_end|> | [
"<|reference_start|> Low-Rank Subspaces in GANs: The latent space of a Generative Adversarial Network (GAN) has been shown to encode rich semantics within some subspaces. To identify these subspaces, researchers typically analyze the statistical information from a collection of synthesized data, and the identified subspaces tend to control image attributes globally (i.e., manipulating an attribute causes the change of an entire image). By contrast, this work introduces low-rank subspaces that enable more precise control of GAN generation. Concretely, given an arbitrary image and a region of interest (e.g., eyes of face images), we manage to relate the latent space to the image region with the Jacobian matrix and then use low-rank factorization to discover steerable latent subspaces. There are three distinguishable strengths of our approach that can be aptly called LowRankGAN. First, compared to analytic algorithms in prior work, our low-rank factorization of Jacobians is able to find the low-dimensional representation of attribute manifold, making image editing more precise and controllable. Second, low-rank factorization naturally yields a null space of attributes such that moving the latent code within it only affects the outer region of interest. Therefore, local image editing can be simply achieved by projecting an attribute vector into the null space without relying on a spatial mask as existing methods do. Third, our method can robustly work with a local region from one image for analysis yet well generalize to other images, making it much easy to use in practice. Extensive experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN. <|reference_end|>",
"<|reference_start|> Orthogonal Jacobian Regularization for Unsupervised Disentanglement in Image Generation: Unsupervised disentanglement learning is a crucial issue for understanding and exploiting deep generative models. Recently, SeFa tries to find latent disentangled directions by performing SVD on the first projection of a pre-trained GAN. However, it is only applied to the first layer and works in a post-processing way. Hessian Penalty minimizes the off-diagonal entries of the output's Hessian matrix to facilitate disentanglement, and can be applied to multi-layers.However, it constrains each entry of output independently, making it not sufficient in disentangling the latent directions (e.g., shape, size, rotation, etc.) of spatially correlated variations. In this paper, we propose a simple Orthogonal Jacobian Regularization (OroJaR) to encourage deep generative model to learn disentangled representations. It simply encourages the variation of output caused by perturbations on different latent dimensions to be orthogonal, and the Jacobian with respect to the input is calculated to represent this variation. We show that our OroJaR also encourages the output's Hessian matrix to be diagonal in an indirect manner. In contrast to the Hessian Penalty, our OroJaR constrains the output in a holistic way, making it very effective in disentangling latent dimensions corresponding to spatially correlated variations. Quantitative and qualitative experimental results show that our method is effective in disentangled and controllable image generation, and performs favorably against the state-of-the-art methods. Our code is available at https://github.com/csyxwei/OroJaR <|reference_end|>",
"<|reference_start|> Is Generator Conditioning Causally Related to GAN Performance?: Recent work (Pennington et al, 2017) suggests that controlling the entire distribution of Jacobian singular values is an important design consideration in deep learning. Motivated by this, we study the distribution of singular values of the Jacobian of the generator in Generative Adversarial Networks (GANs). We find that this Jacobian generally becomes ill-conditioned at the beginning of training. Moreover, we find that the average (with z from p(z)) conditioning of the generator is highly predictive of two other ad-hoc metrics for measuring the 'quality' of trained GANs: the Inception Score and the Frechet Inception Distance (FID). We test the hypothesis that this relationship is causal by proposing a 'regularization' technique (called Jacobian Clamping) that softly penalizes the condition number of the generator Jacobian. Jacobian Clamping improves the mean Inception Score and the mean FID for GANs trained on several datasets. It also greatly reduces inter-run variance of the aforementioned scores, addressing (at least partially) one of the main criticisms of GANs. <|reference_end|>",
"<|reference_start|> {{GAN: مركبات الجاليوم نترايد ( GaN ) هى إحدى فصائل أشباه الموصلات تعتبر من أهم الوصلات الالكترونية وذلك لتميزها التطبيقي وخاصة فى المجالات الكهروضوئية ( الليزر ) والمجالات التطبيقية الالكترونية الاخرى ( High power Devices ) بسبب سعة فجوة الطاقة حيث أن هذه السعة تتراوح ما بين 1.9 ev حتى 6.28 ونتيجة لذلك نستطيع الحصول على مصادر متعددة للطاقة ومثال على ذلك ثنائيات اﻹنبعاث الضوئي (LEDs ) ومصادر ثنائية أ خرى ( LDs ) ونظرا لأهمية هذا النوع من التقنية حاليا فان هناك توجها عالميا كبيرآ جدآ على مستوى مراكز الأبحاث سواء كان ذلك فى الشركات المتخصصة أو مراكزالأبحاث فى الجامعات والمعاهد لدراسة خصائص هذه المواد من النواحي الفيزيائية والتطبيقية سواء عن طريق التصنيع أو القياسات الكهربية والضوئية والمغنا طيسية وبناء على ماسبق فقد قمنا بدراسات عدة للدخول فى هذا المجال من أجل دراسة خصائص هذه المواد الكهربية وتم الحصول على العلاقة مابين التيار والجهد ودرجة الحرارة وكذلك دراسة السعة الكهربية لها وقد توصلنا الى نتائج جيدة جدا لهذه الوصلات الكهربية حيث اننا استخدمنا معدن الذهب وكذلك معدن الألمنيوم من أجل التوصيلات الكهربية وتمكنا بعد ذلك من الحصول على نتائج مطابقة بشكل كبير لماهو موجود نظريا كما هو موضح فى الجزء الخاص بالنتائج العملية فى هذا البحث . <|reference_end|>"
] | [
0,
6,
12,
22
] | {"<|cite_1|>": "ss-805363", "<|multi_cite_2_1|>": "arxiv-110679", "<|multi_cite_2_2|>": "arxiv-195738", "<|multi_cite_2_3|>": "arxiv-310773", "<|multi_cite_2_4|>": "arxiv-323257", "<|multi_cite_2_5|>": "ss-921362", "<|multi_cite_3_1|>": "arxiv-235477", "<|multi_cite_3_2|>": "arxiv-278152", "<|multi_cite_3_4|>": "ss-1211382", "<|multi_cite_3_5|>": "ss-986563", "<|multi_cite_4_2|>": "ss-1671152", "<|multi_cite_5_2|>": "ss-921362", "<|multi_cite_6_1|>": "arxiv-184253", "<|multi_cite_6_2|>": "arxiv-238712", "<|multi_cite_7_1|>": "ss-921362", "<|multi_cite_7_2|>": "ss-1211382", "<|cite_8|>": "arxiv-238712", "<|cite_9|>": "ss-916941", "<|cite_10|>": "arxiv-105653", "<|cite_11|>": "ss-805363", "<|multi_cite_12_1|>": "arxiv-184253", "<|multi_cite_12_2|>": "arxiv-238712", "<|multi_cite_12_3|>": "arxiv-350469", "<|multi_cite_12_4|>": "ss-933274", "<|multi_cite_12_5|>": "arxiv-447650", "<|multi_cite_13_1|>": "arxiv-213068", "<|multi_cite_13_2|>": "arxiv-279776", "<|multi_cite_14_1|>": "arxiv-110679", "<|multi_cite_14_2|>": "ss-2542187", "<|multi_cite_16_1|>": "ss-921362", "<|multi_cite_16_3|>": "ss-1988895", "<|multi_cite_17_1|>": "arxiv-388769", "<|multi_cite_17_2|>": "ss-916941", "<|multi_cite_17_3|>": "arxiv-399804", "<|multi_cite_18_1|>": "ss-1295153", "<|multi_cite_18_2|>": "ss-921362", "<|multi_cite_18_3|>": "arxiv-281787", "<|multi_cite_19_1|>": "ss-921362", "<|multi_cite_19_3|>": "ss-1276232", "<|multi_cite_19_4|>": "arxiv-262409", "<|multi_cite_19_5|>": "ss-1671152", "<|multi_cite_20_1|>": "ss-921362", "<|multi_cite_20_2|>": "arxiv-235477", "<|multi_cite_20_3|>": "arxiv-278152", "<|multi_cite_20_5|>": "ss-1211382", "<|multi_cite_20_6|>": "ss-1988895", "<|multi_cite_20_7|>": "arxiv-245229", "<|multi_cite_20_8|>": "ss-1295153", "<|multi_cite_20_9|>": "ss-1276232", "<|multi_cite_20_10|>": "arxiv-216553", "<|multi_cite_20_11|>": "arxiv-315252", "<|multi_cite_21_1|>": "arxiv-181751", "<|multi_cite_21_2|>": "arxiv-279776", "<|multi_cite_21_4|>": "ss-1295153", "<|multi_cite_21_5|>": "arxiv-281787", "<|multi_cite_21_6|>": "arxiv-262409", "<|multi_cite_21_8|>": "ss-1671152", "<|multi_cite_21_9|>": "ss-986563", "<|multi_cite_22_1|>": "arxiv-181751", "<|multi_cite_22_2|>": "arxiv-279776", "<|multi_cite_22_3|>": "ss-1295153", "<|cite_23|>": "arxiv-281787", "<|multi_cite_24_2|>": "arxiv-262409", "<|multi_cite_24_4|>": "ss-1671152", "<|multi_cite_24_5|>": "ss-986563", "<|multi_cite_25_1|>": "ss-1123818", "<|multi_cite_25_2|>": "ss-1500612", "<|multi_cite_25_3|>": "arxiv-238712", "<|multi_cite_25_4|>": "ss-1295153", "<|multi_cite_25_5|>": "arxiv-361393", "<|multi_cite_25_7|>": "arxiv-286254", "<|multi_cite_25_8|>": "arxiv-447650", "<|multi_cite_26_1|>": "ss-1123818", "<|multi_cite_26_2|>": "ss-1500612", "<|cite_27|>": "arxiv-148619", "<|cite_28|>": "ss-1506562", "<|multi_cite_29_1|>": "arxiv-124793", "<|multi_cite_29_2|>": "arxiv-286254", "<|multi_cite_29_3|>": "arxiv-361393", "<|multi_cite_29_4|>": "ss-1295153", "<|multi_cite_29_6|>": "arxiv-238712", "<|multi_cite_30_1|>": "arxiv-286254", "<|multi_cite_30_2|>": "arxiv-361393", "<|cite_31|>": "arxiv-238712", "<|multi_cite_32_1|>": "arxiv-124793", "<|multi_cite_32_2|>": "ss-1295153"} |
2103.06993 | <|paper_start|> Title: Robofleet: Open Source Communication and Management for Fleets of Autonomous Robots
Abstract: Robofleet: Open Source Communication and Management for Fleets of Autonomous Robots: Long-term deployment of a fleet of mobile robots requires reliable and secure two-way communication channels between individual robots and remote human operators for supervision and tasking. Existing open-source solutions to this problem degrade in performance in challenging real-world situations such as intermittent and low-bandwidth connectivity, do not provide security control options, and can be computationally expensive on hardware-constrained mobile robot platforms. In this paper, we present Robofleet, a lightweight open-source system which provides inter-robot communication, remote monitoring, and remote tasking for a fleet of ROS-enabled service-mobile robots that is designed with the practical goals of resilience to network variance and security control in mind. Robofleet supports multi-user, multi-robot communication via a central server. This architecture deduplicates network traffic between robots, significantly reducing overall network load when compared with native ROS communication. This server also functions as a single entrypoint into the system, enabling security control and user authentication. Individual robots run the lightweight Robofleet client, which is responsible for exchanging messages with the Robofleet server. It automatically adapts to adverse network conditions through backpressure monitoring as well as topic-level priority control, ensuring that safety-critical messages are successfully transmitted. Finally, the system includes a web-based visualization tool that can be run on any internet-connected, browser-enabled device to monitor and control the fleet. We compare Robofleet to existing methods of robotic communication, and demonstrate that it provides superior resilience to network variance while maintaining performance that exceeds that of widely-used systems.
Introduction
Remote management, multi-agent communication, and user tasking for service-mobile robots is essential for
long-term deployments -- some long-term projects such as the
CoBots <|cite_start|> (Reference: The 1,000-km challenge: Insights and quantitative and qualitative results: On 18 November 2014, a team of four autonomous CoBot robots reached 1,000-km of overall autonomous navigation, as a result of a 1,000-km challenge that the authors had set three years earlier. The authors are frequently asked for the lessons learned, as well as the performance results. In this article, they introduce the challenge and contribute a detailed presentation of technical insights as well as quantitative and qualitative results. They have previously presented the algorithms for the individual technical contributions, namely robot localization, symbiotic robot autonomy, and robot task scheduling. In this article, they present the data collected over the 1,000-km challenge and analyze it to evaluate the accuracy and robustness of the localization algorithms on the CoBots. Furthermore, they present technical insights into the algorithms, which they believe are responsible for the robots' continuous robust performance.) <|cite_end|> and STRANDS <|cite_start|> (Reference: The STRANDS Project: Long-Term Autonomy in Everyday Environments: Thanks to the efforts of the robotics and autonomous systems community, robots are becoming ever more capable. There is also an increasing demand from end-users for autonomous service robots that can operate in real environments for extended periods. In the STRANDS project we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots, and deploying these systems for long-term installations in security and care environments. Over four deployments, our robots have been operational for a combined duration of 104 days autonomously performing end-user defined tasks, covering 116km in the process. In this article we describe the approach we have used to enable long-term autonomous operation in everyday environments, and how our robots are able to use their long run times to improve their own performance.) <|cite_end|> have relied
on custom remote monitoring and tasking interfaces to fulfil this need, but a
more general open-source solution for arbitrary and heterogeneous fleets of
robots remains elusive.
In this paper, we present Robofleet -- a simple, robust, and reusable solution
to this problem.
An effective multi-robot, multi-user fleet management system must satisfy several key criteria -- the system must
\begin{inparaenum}
\item support multiple simultaneously deployed robots,
\item support communication both between robots and between operators and robots,
\item have minimal compute overhead and be capable of running on low-powered devices,
\item support secure communications and secure access controls, and
\item be resilient to fluctuating network bandwidth and availability.
\end{inparaenum}
While there is no single open-source solution that meets all such criteria,
several partial solutions for remote robot monitoring and multi-robot
communication include the Robot Web Tools <|cite_start|> (Reference: Robot web tools: Efficient messaging for cloud robotics: Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and human-robot interaction at a global scale through modern web browsers. Building from rosbridge, this paper describes our efforts with Robot Web Tools to advance: 1) human-robot interaction through usable client and visualization libraries for more efficient development of front-end human-robot interfaces, and 2) cloud robotics through more efficient methods of transporting high-bandwidth topics (e.g., kinematic transforms, image streams, and point clouds). We further discuss the significant impact of Robot Web Tools through a diverse set of use cases that showcase the importance of a generic messaging protocol and front-end development systems for human-robot interaction.) <|cite_end|>,
Rosbridge <|cite_start|> (Reference: Rosbridge: ROS for Non-ROS Users: ) <|cite_end|>, and the native
inter-process communication of Robot Operating System (ROS). While
these solutions are effective at meeting the use case of single-robot
deployments or short-range remote monitoring over a single reliable network,
they exhibit degraded performance in challenging conditions such as intermittent
network connectivity or with a large number of clients.
Robofleet includes several features to meet the aforementioned needs of reliable
multi-robot, multi-user fleet management. It supports message deduplication and
automatic detection of adverse network conditions using backpressure monitoring.
In addition, it supports configuration for rate limiting of topics
combined with priority-based topic scheduling, ensuring that safety-critical
messages take precedence over others. Its single-server architecture prevents
duplication of message streams between robots, further decreasing network load
in the case of multi-robot communication.
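To make the scheduling behaviour concrete, the following Python sketch shows one way priority-based topic scheduling with per-topic rate limits could be implemented; it is an illustration only, not code from the Robofleet client, and all class names, configuration keys, and parameter values are invented for this example.
\begin{verbatim}
import heapq
import itertools
import time

class TopicScheduler:
    """Toy scheduler: higher-priority topics are drained first and every
    topic is rate-limited to a configured maximum send frequency."""

    def __init__(self, topic_config):
        # topic_config maps topic name -> (priority, max_hz).
        self.config = topic_config
        self.last_sent = {topic: float("-inf") for topic in topic_config}
        self.counter = itertools.count()   # tie-breaker for equal priorities
        self.queue = []                    # min-heap of (-priority, seq, topic, msg)

    def enqueue(self, topic, message):
        priority, _ = self.config[topic]
        heapq.heappush(self.queue, (-priority, next(self.counter), topic, message))

    def next_message(self):
        """Return the highest-priority message whose topic is not currently
        rate-limited, or None if everything in the queue must still wait."""
        deferred, chosen = [], None
        while self.queue:
            item = heapq.heappop(self.queue)
            _, _, topic, message = item
            _, max_hz = self.config[topic]
            if time.monotonic() - self.last_sent[topic] >= 1.0 / max_hz:
                self.last_sent[topic] = time.monotonic()
                chosen = (topic, message)
                break
            deferred.append(item)          # rate-limited: retry on a later call
        for item in deferred:
            heapq.heappush(self.queue, item)
        return chosen

sched = TopicScheduler({"/safety/estop": (10, 100.0), "/camera/image": (1, 2.0)})
sched.enqueue("/camera/image", b"frame-0")
sched.enqueue("/safety/estop", b"STOP")
print(sched.next_message()[0])             # the safety-critical topic goes first
\end{verbatim}
Under these (made-up) settings, a queued emergency-stop message always leaves before camera frames, and camera frames are deferred so that no more than 2\,Hz are sent over a saturated link.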
Robofleet uses a compact message format to minimize bandwidth usage and enable
high throughput rates when compared with Rosbridge. Robofleet also provides
topic-level access control, user authentication, and supports static IP
address-based traffic control by leveraging a secure VPN. In addition to a
central server, transport layer, and robot client, Robofleet includes an
extensible web-based visualizer and tasking tool tailored towards autonomous
mobile robot deployment, which enables connection to the Robofleet system from
any browser-enabled device.
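As an illustration of what topic-level access control can look like, the sketch below checks a publish or subscribe request against a per-identity permission table; the table, the identities, and the wildcard syntax are invented for this example and do not reflect Robofleet's actual configuration format.
\begin{verbatim}
# Hypothetical permission table for this sketch only.
PERMISSIONS = {
    "operator@example.org": {"subscribe": ["*"],
                             "publish":   ["/robot1/move_base_simple/goal"]},
    "robot1":               {"subscribe": ["/robot1/#"],
                             "publish":   ["/robot1/#"]},
}

def topic_matches(pattern, topic):
    """'*' matches any topic; a trailing '#' matches a prefix; otherwise exact."""
    if pattern == "*":
        return True
    if pattern.endswith("#"):
        return topic.startswith(pattern[:-1])
    return topic == pattern

def is_allowed(identity, action, topic):
    rules = PERMISSIONS.get(identity, {})
    return any(topic_matches(p, topic) for p in rules.get(action, []))

print(is_allowed("robot1", "publish", "/robot1/odometry"))               # True
print(is_allowed("operator@example.org", "publish", "/robot1/cmd_vel"))  # False
\end{verbatim}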
We provide experimental results to demonstrate Robofleet's superior performance
compared to existing state-of-the-art solutions in the case of adverse network
conditions and multi-robot interactions. We observe that Robofleet gracefully
recovers from intermittent connectivity $\sim 5$ times faster than
Rosbridge, and is able to maintain near-constant latency as the number of robots
increases, compared to linear degradation using ROS. Robofleet is
available as open source code at
\url{https://github.com/ut-amrl/robofleet}.
Related Work
Beyond the explicit goals of monitoring and tasking, successfully sharing
information between robots enables a wide variety of research initiatives beyond
long-term autonomous deployment. Waibel et al. <|cite_start|> (Reference: RoboEarth: • A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers.) <|cite_end|>
introduce a platform consisting of communication
layers <|cite_start|> (Reference: Rapyuta: A Cloud Robotics Platform: In this paper, we present the design and implementation of Rapyuta, an open-source cloud robotics platform. Rapyuta helps robots to offload heavy computation by providing secured customizable computing environments in the cloud. The computing environments also allow the robots to easily access the RoboEarth knowledge repository. Furthermore, these computing environments are tightly interconnected, paving the way for deployment of robotic teams. We also describe three typical use cases, some benchmarking and performance results, and two proof-of-concept demonstrations. Note to Practitioners - Rapyuta allows to outsource some or all of a robot's onboard computational processes to a commercial data center. Its main difference to other, similar frameworks like the Google App Engine is that it is specifically tailored towards multiprocess high-bandwidth robotics applications/middlewares and provides a well-documented open-source implementation that can be modified to cover a large variety of robotic scenarios. Rapyuta supports the outsourcing of almost all of the current 3000+ ROS packages out of the box and is easily extensible to other robotic middleware. A pre-installed Amazon Machine Image (AMI) is provided that allows to launch Rapyuta in any of Amazon's data center within minutes. Once launched, robots can authenticate themselves to Rapyuta, create one or more secured computational environments in the cloud and launch the desired nodes/processes. The computing environments can also be arbitrarily connected to build parallel computing architectures on the fly. The WebSocket-based communication protocol, which provides synchronous and asynchronous communication mechanisms, allows not only ROS based robots, but also browsers and mobiles phones to connect to the ecosystem. Rapyuta's computing environments are private, secure, and optimized for data throughput. However, its performance is in large part determined by the latency and quality of the network connection and the performance of the data center. Optimizing performance under these constraints is typically highly application-specific. The paper illustrates an example of performance optimization in a collaborative real-time 3-D mapping application. Other target applications include collaborative 3-D mapping, task/grasp planning, object recognition, localization, and teleoperation, among others.) <|cite_end|> and databases to construct a shared ``world
model" between robots, allowing them to succeed at a wide range of tasks. While
these works seek to allow robots to share information over long time scales, in
this paper we focus on the short-term, time-sensitive information exchange
necessary to enable tasking and monitoring. In addition to long-term autonomous
deployment, multi-agent communication is instrumental for applications such
as collaborative mapping <|cite_start|> (Reference: {CoSLAM: Collaborative Visual SLAM in Dynamic Environments: This paper studies the problem of vision-based simultaneous localization and mapping (SLAM) in dynamic environments with multiple cameras. These cameras move independently and can be mounted on different platforms. All cameras work together to build a global map, including 3D positions of static background points and trajectories of moving foreground points. We introduce intercamera pose estimation and intercamera mapping to deal with dynamic objects in the localization and mapping process. To further enhance the system robustness, we maintain the position uncertainty of each map point. To facilitate intercamera operations, we cluster cameras into groups according to their view overlap, and manage the split and merge of camera groups in real time. Experimental results demonstrate that our system can work robustly in highly dynamic environments and produce more accurate results in static environments.) <|cite_end|> <|cite_start|> (Reference: Cloud-Based Collaborative 3D Mapping in Real-Time With Low-Cost Robots: This paper presents an architecture, protocol, and parallel algorithms for collaborative 3D mapping in the cloud with low-cost robots. The robots run a dense visual odometry algorithm on a smartphone-class processor. Key-frames from the visual odometry are sent to the cloud for parallel optimization and merging with maps produced by other robots. After optimization the cloud pushes the updated poses of the local key-frames back to the robots. All processes are managed by Rapyuta, a cloud robotics framework that runs in a commercial data center. This paper includes qualitative visualization of collaboratively built maps, as well as quantitative evaluation of localization accuracy, bandwidth usage, processing speeds, and map storage.) <|cite_end|>, distributed
control, and cooperative team behaviors such as robot soccer <|cite_start|> (Reference: {{RoboCup: The Robot World Cup Initiative: The Robot World Cup Initiative (R, oboCup) is attempt to foster AI and intelligent rohoties research by providing a standard problem where wide range of technologies especially concerning multi-agent research (:an be integrated and examined. The first RoboCup competition is to be, heht at. IJCAI-97, Nagoya. In order for a robot team to actually perform a soccer game. various technologies must I)e incorl)orated including: design principles of autononmus agents, multi-agent collaboration, strategy acquisition, real-time rea.~oning, robotics, and sensor-fllsion. Unlike AAAI robot competition, which is tuned for a single heavy-duty slow-moving robot. RoboCup is a task for a team of multiple f‘ast-moving robots under a dynamic environmen(. Although RoboCnp’s final target is a worhl cup with real robots, RoboCup offers a soft.ware platform for reseaxch on the software aspects of RoboCup. This paper describes teclini(’M challenges involw~d in RoboCup, rules, and simulation environment.) <|cite_end|>.
There has been a significant amount of interest in this short-term communication task in the robotics community, resulting in a variety of widely-used utilities. Robot Web Tools includes an ecosystem of different tools that use the common Rosbridge transport layer. These tools include visualization layers such as Ros2DJS, client libraries such as RosLibJS and RosLibPy, and interactive dashboards <|cite_start|> (Reference: Robot web tools: Efficient messaging for cloud robotics: Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and human-robot interaction at a global scale through modern web browsers. Building from rosbridge, this paper describes our efforts with Robot Web Tools to advance: 1) human-robot interaction through usable client and visualization libraries for more efficient development of front-end human-robot interfaces, and 2) cloud robotics through more efficient methods of transporting high-bandwidth topics (e.g., kinematic transforms, image streams, and point clouds). We further discuss the significant impact of Robot Web Tools through a diverse set of use cases that showcase the importance of a generic messaging protocol and front-end development systems for human-robot interaction.) <|cite_end|> <|cite_start|> (Reference: Rosbridge: ROS for Non-ROS Users: ) <|cite_end|>. There has also been recent research interest in developing interfaces for human operators to interact with fleets of remote robots <|cite_start|> (Reference: Multi-robot Systems, Virtual Reality and ROS: Developing a New Generation of Operator Interfaces: ) <|cite_end|>. Each of these tools individually address discrete parts of the technical pipeline required to successfully deploy autonomous robots, but stitching them together into a coherent workflow currently involves significant overhead. Additionally, the core Rosbridge transport layer has critical performance and bandwidth consumption issues that limit practical use cases of these tools. A common alternative to Rosbridge is using ROS itself to facilitate communication. A shared ROS master can enable fast robot-to-robot communication when compared with Rosbridge. There has been research in systems which enable communication using a shared ROS master between robots across the internet using port forwarding <|cite_start|> (Reference: Establishing remote networks for ROS applications via Port Forwarding: In a Robot Operating System (ROS) application, robot software is often distributed across multiple networked components, forming the ROS network, where every component acts as server and/or a client, and publishing and/or receiving robot data simultaneously. For indoor robots, a local ROS network, through a Wi-Fi hotspot, is sufficient. But for outdoor robots, a remote ROS network is needed to connect the ROS application to the cloud. Although a number of cloud-based solutions support this, implementing them is challenging, as they need to be configured to facilitate ROS's unique, multidirectional, and simultaneous flow of robot data. 
This article presents Port Forwarding as an alternative approach, which offers a private, secured, and a direct ROS-to-ROS, eliminating the need for a dedicated middleware and its configuration and setup complexities. But Port Forwarding has its own challenges; chiefly, the beforehand knowledge of Internet addresses of all networked components and the need to update port forwarding settings when these addresses change, which they often do. This article addresses this issue (and others) and presents a detailed procedure for setting Port Forwarding for ROS applications, highlighting configuration, and troubleshooting steps. Also, the article compares between Port Forwarding and cloud-based solutions, in terms of setup, performance, and others. Results show that robot performance under Port Forwarding is on par with cloud-based solutions, but it required a fraction of setup time. The authors developed a set of shell scripts that monitor the Internet addresses of all networked components and auto-update Port Forwarding settings when they change, solving this issue. With this, Port Forwarding could be considered a viable option for ROS system networks, on par with cloud-based solutions.) <|cite_end|>. However, this architecture lacks authentication support, degrades in performance as the number of robots increases, requires that all the machines in the system run ROS, and requires special network configuration.
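For context, a minimal client built on the Rosbridge/RosLibPy stack discussed above looks roughly as follows; the host, port, and topic names are placeholders, and error handling and reconnection logic are omitted.
\begin{verbatim}
import roslibpy

# Connect to a rosbridge_server websocket (default port 9090).
client = roslibpy.Ros(host="robot.example.org", port=9090)
client.run()

# Subscribe to a status topic published by the robot.
status = roslibpy.Topic(client, "/robot1/status", "std_msgs/String")
status.subscribe(lambda msg: print("status:", msg["data"]))

# Publish a navigation goal back to the robot.
goal = roslibpy.Topic(client, "/robot1/move_base_simple/goal",
                      "geometry_msgs/PoseStamped")
goal.publish(roslibpy.Message({
    "header": {"frame_id": "map"},
    "pose": {"position": {"x": 1.0, "y": 2.0, "z": 0.0},
             "orientation": {"w": 1.0}},
}))

client.terminate()
\end{verbatim}
Each such client opens its own websocket connection and receives its own copy of every message, which contributes to the scaling issues noted above as the number of robots and observers grows.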
Robofleet outperforms Rosbridge in adverse network conditions and surpasses the remote ROS network in large fleets of robots in addition to providing features such as topic-level access control and user authentication. <|paper_end|> | [
"<|reference_start|> The 1,000-km challenge: Insights and quantitative and qualitative results: On 18 November 2014, a team of four autonomous CoBot robots reached 1,000-km of overall autonomous navigation, as a result of a 1,000-km challenge that the authors had set three years earlier. The authors are frequently asked for the lessons learned, as well as the performance results. In this article, they introduce the challenge and contribute a detailed presentation of technical insights as well as quantitative and qualitative results. They have previously presented the algorithms for the individual technical contributions, namely robot localization, symbiotic robot autonomy, and robot task scheduling. In this article, they present the data collected over the 1,000-km challenge and analyze it to evaluate the accuracy and robustness of the localization algorithms on the CoBots. Furthermore, they present technical insights into the algorithms, which they believe are responsible for the robots' continuous robust performance. <|reference_end|>",
"<|reference_start|> The STRANDS Project: Long-Term Autonomy in Everyday Environments: Thanks to the efforts of the robotics and autonomous systems community, robots are becoming ever more capable. There is also an increasing demand from end-users for autonomous service robots that can operate in real environments for extended periods. In the STRANDS project we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots, and deploying these systems for long-term installations in security and care environments. Over four deployments, our robots have been operational for a combined duration of 104 days autonomously performing end-user defined tasks, covering 116km in the process. In this article we describe the approach we have used to enable long-term autonomous operation in everyday environments, and how our robots are able to use their long run times to improve their own performance. <|reference_end|>",
"<|reference_start|> RoboEarth: • A submitted manuscript is the version of the article upon submission and before peer-review. There can be important differences between the submitted version and the official published version of record. People interested in the research are advised to contact the author for the final version of the publication, or visit the DOI to the publisher's website. • The final author version and the galley proof are versions of the publication after peer review. • The final published version features the final layout of the paper including the volume, issue and page numbers. <|reference_end|>",
"<|reference_start|> Robot web tools: Efficient messaging for cloud robotics: Since its official introduction in 2012, the Robot Web Tools project has grown tremendously as an open-source community, enabling new levels of interoperability and portability across heterogeneous robot systems, devices, and front-end user interfaces. At the heart of Robot Web Tools is the rosbridge protocol as a general means for messaging ROS topics in a client-server paradigm suitable for wide area networks, and human-robot interaction at a global scale through modern web browsers. Building from rosbridge, this paper describes our efforts with Robot Web Tools to advance: 1) human-robot interaction through usable client and visualization libraries for more efficient development of front-end human-robot interfaces, and 2) cloud robotics through more efficient methods of transporting high-bandwidth topics (e.g., kinematic transforms, image streams, and point clouds). We further discuss the significant impact of Robot Web Tools through a diverse set of use cases that showcase the importance of a generic messaging protocol and front-end development systems for human-robot interaction. <|reference_end|>"
] | [
0,
1,
4,
9
] | {"<|cite_1|>": "ss-955560", "<|cite_2|>": "arxiv-96003", "<|cite_3|>": "ss-1279541", "<|cite_4|>": "ss-1279542", "<|cite_5|>": "ss-2330250", "<|cite_6|>": "ss-713375", "<|multi_cite_7_1|>": "ss-1279543", "<|multi_cite_7_2|>": "ss-1279544", "<|cite_9|>": "ss-1118146", "<|cite_10|>": "ss-1279541", "<|cite_11|>": "ss-1279542", "<|cite_12|>": "ss-1275827", "<|cite_13|>": "ss-1279545"} |
2203.00655 | <|paper_start|> Title: A hardware-software co-design approach to minimize the use of memory resources in multi-core neuromorphic processors
Abstract: A hardware-software co-design approach to minimize the use of memory resources in multi-core neuromorphic processors: Both in electronics and biology, physical implementations of neural networks have severe energy and memory constraints. We propose a hardware-software co-design approach for minimizing the use of memory resources in multi-core neuromorphic processors, by taking inspiration from biological neural networks. We use this approach to design new routing schemes optimized for small-world networks and to provide guidelines for designing novel application-specific multi-core neuromorphic chips. Starting from the hierarchical routing scheme proposed, we present a hardware-aware placement algorithm that optimizes the allocation of resources for arbitrary network models. We validate the algorithm with a canonical small-world network and present preliminary results for other networks derived from it.
Introduction
\label{sec:introduction}
The large energy costs of \ac{DNN} and \ac{AI} algorithms are pushing the development of domain-specific hardware accelerators <|cite_start|> (Reference: Highly Efficient Test Architecture for Low-Power AI Accelerators: Low-power artificial intelligence (AI) accelerators are being developed to support the battery-operated edge devices at a minimum expense of classification error. However, the testing of such large AI accelerators with traditional techniques is inefficient in achieving the required certifications for Autonomous Driving Assistant Systems (ISO 26262). ISO 26262 sets very stringent requirements on the testing time and fault coverage during the diagnosability of faults leading to system-level failures during in-field testing. This article proposes a test architecture for low-power AI accelerators by reusing the existing data paths for large AI accelerator arrays. As compared to the full scan-DFT, the proposed test architecture reduces the test time and peak test power, which enhances the reliability of the test responses. The proposed technique reduces 1) the switching power by 87%; 2) testing times by 72% on average for cases up to $32\times 32$ ; and 3) the peak power by 59%. Further, there is an average reduction in the area by 10% for the accelerator.) <|cite_end|>.
Neuromorphic processors are a class of \ac{AI} hardware accelerators that implement computational models of \acp{SNN} adopting in-memory computing strategies and brain-inspired principles of computation <|cite_start|> (Reference: Towards spike-based machine intelligence with neuromorphic computing: ) <|cite_end|> <|cite_start|> (Reference: A recipe for creating ideal hybrid memristive-CMOS neuromorphic computing systems: The development of memristive device technologies has reached a level of maturity to enable the design of complex and large-scale hybrid memristive-CMOS neural processing systems. These systems offer promising solutions for implementing novel in-memory computing architectures for machine learning and data analysis problems. We argue that they are also ideal building blocks for the integration in neuromorphic electronic circuits suitable for ultra-low power brain-inspired sensory processing systems, therefore leading to the innovative solutions for always-on edge-computing and Internet-of-Things (IoT) applications. Here we present a recipe for creating such systems based on design strategies and computing principles inspired by those used in mammalian brains. We enumerate the specifications and properties of memristive devices required to support always-on learning in neuromorphic computing systems and to minimize their power consumption. Finally, we discuss in what cases such neuromorphic systems can complement conventional processing ones and highlight the importance of exploiting the physics of both the memristive devices and of the CMOS circuits interfaced to them.) <|cite_end|> <|cite_start|> (Reference: Memory devices and applications for in-memory computing: ) <|cite_end|>.
They represent a very promising approach, especially for edge-computing applications, as they have the potential to reduce power consumption to ultra-low (e.g., sub milliwatt) figures <|cite_start|> (Reference: Adaptive Extreme Edge Computing for Wearable Devices: Wearable devices are a fast-growing technology with impact on personal healthcare for both society and economy. Due to the widespread of sensors in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital in future smart wearable devices. The visioning and forecasting of how to bring computation to the edge in smart sensors have already begun, with an aspiration to provide adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions towards smart wearable devices that can provide guidance to research in this pervasive computing era. We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors. To envision this concept, we provide a systematic outline in which prospective low power and low latency scenarios of wearable sensors in neuromorphic platforms are expected. We successively describe vital potential landscapes of neuromorphic processors exploiting complementary metal-oxide semiconductors (CMOS) and emerging memory technologies (e.g. memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. We additionally investigate the challenges beyond neuromorphic computing hardware, algorithms and devices that could impede enhancement of adaptive edge computing in smart wearable devices.) <|cite_end|>.
However, the requirement of \ac{SNN} hardware accelerators to store the state of each neuron, combined with their in-memory computing circuit design techniques, leads to very large area consumption figures, which limits the sizes and numbers of parameters of the networks that they can implement.
The current strategy used to support the integration of large \ac{SNN} models in these accelerators is to use multi-core architectures <|cite_start|> (Reference: A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors ({DYNAPs: Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.) <|cite_end|> <|cite_start|> (Reference: {TrueNorth: 类脑计算是一种基于神经网络的全新数据存储和计算技术,通过模拟大脑的工作机理,可以突破传统计算机处理大型问题时遇到的冯·诺依曼瓶颈,在显著提高信息处理速度的同时大幅降低功耗,并且具有自我学习和自适应能力。介绍了IBM最新研究的TrueNorth神经元芯片技术,包括其基本架构、工作原理、芯片性能、应用成果等,并展望了类脑计算技术的未来发展前景。) <|cite_end|> <|cite_start|> (Reference: Loihi: A Neuromorphic Manycore Processor with On-Chip
Learning: Loihi is a 60-mm2 chip fabricated in Intels 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area. This provides an unambiguous example of spike-based computation, outperforming all known conventional solutions.) <|cite_end|> <|cite_start|> (Reference: SpiNNaker: A Spiking Neural Network Architecture: ) <|cite_end|>.
In these architectures, each core either \emph{emulates} with analog circuits <|cite_start|> (Reference: A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors ({DYNAPs: Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed.) <|cite_end|> or \emph{simulates} with time-multiplexed digital circuits <|cite_start|> (Reference: {TrueNorth: 类脑计算是一种基于神经网络的全新数据存储和计算技术,通过模拟大脑的工作机理,可以突破传统计算机处理大型问题时遇到的冯·诺依曼瓶颈,在显著提高信息处理速度的同时大幅降低功耗,并且具有自我学习和自适应能力。介绍了IBM最新研究的TrueNorth神经元芯片技术,包括其基本架构、工作原理、芯片性能、应用成果等,并展望了类脑计算技术的未来发展前景。) <|cite_end|> <|cite_start|> (Reference: Loihi: A Neuromorphic Manycore Processor with On-Chip
Learning: Loihi is a 60-mm2 chip fabricated in Intels 14-nm process that advances the state-of-the-art modeling of spiking neural networks in silicon. It integrates a wide range of novel features for the field, such as hierarchical connectivity, dendritic compartments, synaptic delays, and, most importantly, programmable synaptic learning rules. Running a spiking convolutional form of the Locally Competitive Algorithm, Loihi can solve LASSO optimization problems with over three orders of magnitude superior energy-delay-product compared to conventional solvers running on a CPU iso-process/voltage/area. This provides an unambiguous example of spike-based computation, outperforming all known conventional solutions.) <|cite_end|> <|cite_start|> (Reference: SpiNNaker: A Spiking Neural Network Architecture: ) <|cite_end|>, neuro-synaptic arrays in which both the synaptic weight matrix and the network connectivity routing memory blocks occupy a significant proportion of the total layout area.
Although the advent of nano-scale memristive devices can mitigate this problem by enabling the construction of dense cross-bar array structures for storing the weight matrices <|cite_start|> (Reference: Memory devices and applications for in-memory computing: ) <|cite_end|>, the problem of allocating routing and connectivity resources to allow arbitrary networks (e.g., with all-to-all possible connections) at scale is of a fundamental nature that even memristors or 3D-VLSI technologies cannot solve <|cite_start|> (Reference: {Communication in neuronal networks: References 1. A.-L. Barabási, Linked: The New Science of Networks (Perseus, Cambridge, MA, 2002). 2. S. Camazine et al., Self-Organization in Biological Systems (Princeton Univ. Press, Princeton, NJ, 2001). 3. R. Solé, B. C. Goodwin, Signs of Life: How Complexity Pervades Biology (Basic Books, New York, 2000). 4. C. K. Hemelrijk, Ethology 108, 655 (2002). 5. T. D. Seeley, Am. Nat. 150, S22 (1997). 6. D. S. Wilson, L. A. Dugatkin, Am. Nat. 149, 336 (1997). 7. A. J. Moore, E. D. Brodie, J. B. Wolf, Evolution 51, 1352 (1997). 8. U. Alon, Science 301, 1866 (2003). 9. D. Bray, Science 301, 1864 (2003). 10. B. Hölldobler, E. O. Wilson, The Ants (Belknap Press of Harvard Univ. Press, Cambridge, MA, 1990). 11. K. Naumann, M. Winston, K. Slessor, G. Prestwich, F. Webster, Behav. Ecol. Sociobiol. 29, 321 (1991). 12. S. K. Robson, J. F. A. Traniello, in Information Processing in Social Insects, C. Detrain, J. L. Deneubourg, J. M. Pasteels, Eds. (Birkhauser, Basel, Switzerland, 1999), pp. 239–259. 13. T. D. Seeley, S. Camazine, J. Sneyd, Behav. Ecol. Sociobiol. 28, 277 (1991). 14. R. Albert, A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002). 15. D. M. Gordon, Am. Nat. 159, 509 (2002). 16. R. Sole, J. Montoya, Santa Fe Working Paper 00-11060 (2000). 17. H. Jeong, B. Tombor, R. Albert, Z. Oltvai, A.-L. Barabási, Nature 407, 651 (2000). 18. S. W. Pacala, D. M. Gordon, H. C. J. Godfray, Evol. Ecol. 10, 127 (1996). 19. E. Bonabeau, G. Theraulaz, J.-L. Deneubourg, J. Theor. Biol. 195, 157 (1998). 20. J. H. Fewell, M. Winston, Behav. Ecol. Sociobiol. 30, 387 (1992). 21. S. Camazine, Behav. Ecol. Sociobiol. 32, 265 (1993). 22. R. E. Page, J. Erber, Naturwissenschaften 89, 91 (2002). 23. M. Granovetter, Am. J. Sociol. 78, 1360 (1973). 24. S. Goss, S. Aron, J. L. Deneubourg, J. M. Pasteels, Naturwissenschaften 76, 579 (1989). 25. F. Saffre, R. Furey, B. Krafft, J. L. Deneubourg, J. Theor. Biol. 198, 507 (1999). 26. G. Theraulaz, E. Bonabeau, J. L. Deneubourg, Proc. R. Soc. London Ser. B 265, 327 (1998). 27. S. N. Beshers, J. H. Fewell, Annu. Rev. Entomol. 46, 413 (2001). 28. R. E. Page, S. D. Mitchell, Apidologie 29, 171 (1998). 29. J. H. Fewell, R. E. Page, Evol. Ecol. Res. 1, 537 (1999). 30. P. Hogeweg, B. Hesper, Behav. Ecol. Sociobiol. 12, 271 (1983). 31. C. K. Hemelrijk, Biol. Bull. 202, 283 (2002). 32. D. S. Wilson, Am. Nat. 150, S1 (1997).) <|cite_end|>.
Finding trade-offs to optimize both weight-matrix and connectivity/routing memory structures in multi-core neuromorphic processors can therefore have a significant impact on their total chip die area and on the size of the networks they can implement.
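To make the scale of the problem concrete, the following back-of-the-envelope sketch compares the connectivity memory of a dense all-to-all crossbar with that of a pointer-based table bounded by a fixed fan-in; all parameter values are illustrative and are not taken from any of the chips cited above.
\begin{verbatim}
def dense_crossbar_bits(n_neurons, weight_bits):
    """Full all-to-all weight matrix: one entry per neuron pair."""
    return n_neurons * n_neurons * weight_bits

def fanin_table_bits(n_neurons, fan_in, weight_bits, address_bits):
    """Pointer-based table: each neuron stores fan_in (source address, weight) pairs."""
    return n_neurons * fan_in * (weight_bits + address_bits)

n = 1024                                           # neurons in one core
print(dense_crossbar_bits(n, 4) / 8 / 1024)        # 512.0 KiB
print(fanin_table_bits(n, 64, 4, 10) / 8 / 1024)   # 112.0 KiB
\end{verbatim}
Even for this small example, bounding the fan-in cuts the per-core connectivity memory by more than a factor of four, and the gap grows with the number of neurons.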
Following the original neuromorphic engineering approach <|cite_start|> (Reference: Neuromorphic Electronic Systems: It is shown that for many problems, particularly those in which the input data are ill-conditioned and the computation can be specified in a relative manner, biological solutions are many orders of magnitude more effective than those using digital methods. This advantage can be attributed principally to the use of elementary physical phenomena as computational primitives, and to the representation of information by the relative values of analog signals rather than by the absolute values of digital signals. This approach requires adaptive techniques to mitigate the effects of component differences. This kind of adaptation leads naturally to systems that learn about their environment. Large-scale adaptive analog systems are more robust to component degradation and failure than are more conventional systems, and they use far less power. For this reason, adaptive analog technology can be expected to utilize the full potential of wafer-scale silicon fabrication. >) <|cite_end|>, in this paper, we look at animal brains for inspiration and propose brain-inspired strategies to perform this optimization.
Specifically, we show that, by adopting small-world-style network sizes and connectivity, we can implement trade-offs that minimize area requirements while still enabling the design of \ac{SNN} architectures that can solve a wide range of relevant ``edge-computing'' problems, i.e., the types of sensory-motor processing problems that animals solve in the real world.
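A small NetworkX sketch illustrates the small-world property exploited here: a modest amount of random rewiring of a ring lattice keeps clustering (and hence cheap, local connectivity) high while sharply reducing the average path length; the parameters below are illustrative and are not those used in our experiments.
\begin{verbatim}
import networkx as nx

n, k = 256, 8    # neurons and nearest-neighbour degree (illustrative values)
ring        = nx.connected_watts_strogatz_graph(n, k, p=0.0)
small_world = nx.connected_watts_strogatz_graph(n, k, p=0.1)

for name, g in [("ring lattice", ring), ("small-world", small_world)]:
    print(name,
          "clustering:", round(nx.average_clustering(g), 3),
          "avg path length:", round(nx.average_shortest_path_length(g), 2))
\end{verbatim}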
\begin{figure}
\centering
\includegraphics[width=0.3\textwidth]{small-world-in-the-brain.png}
\caption{Small-world network connectivity in the brain.}
\label{fig:small-world-network}
\end{figure} <|paper_end|> | [
"<|reference_start|> Highly Efficient Test Architecture for Low-Power AI Accelerators: Low-power artificial intelligence (AI) accelerators are being developed to support the battery-operated edge devices at a minimum expense of classification error. However, the testing of such large AI accelerators with traditional techniques is inefficient in achieving the required certifications for Autonomous Driving Assistant Systems (ISO 26262). ISO 26262 sets very stringent requirements on the testing time and fault coverage during the diagnosability of faults leading to system-level failures during in-field testing. This article proposes a test architecture for low-power AI accelerators by reusing the existing data paths for large AI accelerator arrays. As compared to the full scan-DFT, the proposed test architecture reduces the test time and peak test power, which enhances the reliability of the test responses. The proposed technique reduces 1) the switching power by 87%; 2) testing times by 72% on average for cases up to $32\\times 32$ ; and 3) the peak power by 59%. Further, there is an average reduction in the area by 10% for the accelerator. <|reference_end|>",
"<|reference_start|> Adaptive Extreme Edge Computing for Wearable Devices: Wearable devices are a fast-growing technology with impact on personal healthcare for both society and economy. Due to the widespread of sensors in pervasive and distributed networks, power consumption, processing speed, and system adaptation are vital in future smart wearable devices. The visioning and forecasting of how to bring computation to the edge in smart sensors have already begun, with an aspiration to provide adaptive extreme edge computing. Here, we provide a holistic view of hardware and theoretical solutions towards smart wearable devices that can provide guidance to research in this pervasive computing era. We propose various solutions for biologically plausible models for continual learning in neuromorphic computing technologies for wearable sensors. To envision this concept, we provide a systematic outline in which prospective low power and low latency scenarios of wearable sensors in neuromorphic platforms are expected. We successively describe vital potential landscapes of neuromorphic processors exploiting complementary metal-oxide semiconductors (CMOS) and emerging memory technologies (e.g. memristive devices). Furthermore, we evaluate the requirements for edge computing within wearable devices in terms of footprint, power consumption, latency, and data size. We additionally investigate the challenges beyond neuromorphic computing hardware, algorithms and devices that could impede enhancement of adaptive edge computing in smart wearable devices. <|reference_end|>",
"<|reference_start|> SpiNNaker: A Spiking Neural Network Architecture: <|reference_end|>",
"<|reference_start|> A Scalable Multicore Architecture With Heterogeneous Memory Structures for Dynamic Neuromorphic Asynchronous Processors ({DYNAPs: Neuromorphic computing systems comprise networks of neurons that use asynchronous events for both computation and communication. This type of representation offers several advantages in terms of bandwidth and power consumption in neuromorphic electronic systems. However, managing the traffic of asynchronous events in large scale systems is a daunting task, both in terms of circuit complexity and memory requirements. Here, we present a novel routing methodology that employs both hierarchical and mesh routing strategies and combines heterogeneous memory structures for minimizing both memory requirements and latency, while maximizing programming flexibility to support a wide range of event-based neural network architectures, through parameter configuration. We validated the proposed scheme in a prototype multicore neuromorphic processor chip that employs hybrid analog/digital circuits for emulating synapse and neuron dynamics together with asynchronous digital circuits for managing the address-event traffic. We present a theoretical analysis of the proposed connectivity scheme, describe the methods and circuits used to implement such scheme, and characterize the prototype chip. Finally, we demonstrate the use of the neuromorphic processor with a convolutional neural network for the real-time classification of visual symbols being flashed to a dynamic vision sensor (DVS) at high speed. <|reference_end|>"
] | [
0,
4,
8,
9
] | {"<|cite_1|>": "ss-1663022", "<|multi_cite_2_1|>": "ss-891517", "<|multi_cite_2_2|>": "ss-1067174", "<|multi_cite_2_3|>": "ss-744446", "<|cite_3|>": "arxiv-312658", "<|multi_cite_4_1|>": "ss-729275", "<|multi_cite_4_2|>": "ss-678800", "<|multi_cite_4_3|>": "ss-1099019", "<|multi_cite_4_4|>": "ss-1660380", "<|cite_5|>": "ss-729275", "<|multi_cite_6_1|>": "ss-678800", "<|multi_cite_6_2|>": "ss-1099019", "<|multi_cite_6_3|>": "ss-1660380", "<|cite_7|>": "ss-744446", "<|cite_8|>": "ss-1941579", "<|cite_9|>": "ss-1426387"} |
2104.08541-1 | <|cite_start|> (Reference: YOLOv3: An Incremental Improvement: We present some updates to YOLO! We made a bunch of little design changes to make it better. We also trained this new network that's pretty swell. It's a little bigger than last time but more accurate. It's still fast though, don't worry. At 320x320 YOLOv3 runs in 22 ms at 28.2 mAP, as accurate as SSD but three times faster. When we look at the old .5 IOU mAP detection metric YOLOv3 is quite good. It achieves 57.9 mAP@50 in 51 ms on a Titan X, compared to 57.5 mAP@50 in 198 ms by RetinaNet, similar performance but 3.8x faster. As always, all the code is online at https://pjreddie.com/yolo/) <|cite_end|>to ground the referred instance. RCCF <|cite_start|> (Reference: A Real-Time Cross-modality Correlation Filtering Method for Referring Expression Comprehension: Referring expression comprehension aims to localize the object instance described by a natural language expression. Current referring expression methods have achieved good performance. However, none of them is able to achieve real-time inference without accuracy drop. The reason for the relatively slow inference speed is that these methods artificially split the referring expression comprehension into two sequential stages including proposal generation and proposal ranking. It does not exactly conform to the habit of human cognition. To this end, we propose a novel Realtime Cross-modality Correlation Filtering method (RCCF). RCCF reformulates the referring expression comprehension as a correlation filtering process. The expression is first mapped from the language domain to the visual domain and then treated as a template (kernel) to perform correlation filtering on the image feature map. The peak value in the correlation heatmap indicates the center points of the target box. In addition, RCCF also regresses a 2-D object size and 2-D offset. The center point coordinates, object size and center point offset together to form the target bounding box. Our method runs at 40 FPS while achieving leading performance in RefClef, RefCOCO, RefCOCO+ and RefCOCOg benchmarks. In the challenging RefClef dataset, our methods almost double the state-of-the-art performance (34.70% increased to 63.79%). We hope this work can arouse more attention and studies to the new cross-modality correlation filtering framework as well as the one-stage framework for referring expression comprehension.) <|cite_end|>formulates the visual grounding problem as a correlation filtering process <|cite_start|> (Reference: {Visual Object Tracking Using Adaptive Correlation Filters: Although not commonly used, correlation filters can track complex objects through rotations, occlusions and other distractions at over 20 times the rate of current state-of-the-art techniques. The oldest and simplest correlation filters use simple templates and generally fail when applied to tracking. More modern approaches such as ASEF and UMACE perform better, but their training needs are poorly suited to tracking. Visual tracking requires robust filters to be trained from a single frame and dynamically adapted as the appearance of the target object changes. This paper presents a new type of correlation filter, a Minimum Output Sum of Squared Error (MOSSE) filter, which produces stable correlation filters when initialized using a single frame. A tracker based upon MOSSE filters is robust to variations in lighting, scale, pose, and nonrigid deformations while operating at 669 frames per second. 
Occlusion is detected based upon the peak-to-sidelobe ratio, which enables the tracker to pause and resume where it left off when the object reappears.) <|cite_end|> <|cite_start|> (Reference: High-Speed Tracking with Kernelized Correlation Filters: The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies -- any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the Discrete Fourier Transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new Kernelized Correlation Filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call Dual Correlation Filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source.) <|cite_end|>, and picks the peak value of the correlation heatmap as the center of target objects. The recent work ReSC <|cite_start|> (Reference: Improving One-stage Visual Grounding by Recursive Sub-query Construction: We improve one-stage visual grounding by addressing current limitations on grounding long and complex queries. Existing one-stage methods encode the entire language query as a single sentence embedding vector, e.g., taking the embedding from BERT or the hidden state from LSTM. This single vector representation is prone to overlooking the detailed descriptions in the query. To address this query modeling deficiency, we propose a recursive sub-query construction framework, which reasons between image and query for multiple rounds and reduces the referring ambiguity step by step. We show our new one-stage method obtains 5.0%, 4.5%, 7.5%, 12.8% absolute improvements over the state-of-the-art one-stage baseline on ReferItGame, RefCOCO, RefCOCO+, and RefCOCOg, respectively. In particular, superior performances on longer and more complex queries validates the effectiveness of our query modeling.) <|cite_end|>devises a recursive sub-query construction module to address the limitations of FAOA <|cite_start|> (Reference: A Fast and Accurate One-Stage Approach to Visual Grounding: We propose a simple, fast, and accurate one-stage approach to visual grounding, inspired by the following insight. The performances of existing propose-and-rank two-stage methods are capped by the quality of the region candidates they propose in the first stage --- if none of the candidates could cover the ground truth region, there is no hope in the second stage to rank the right region to the top. To avoid this caveat, we propose a one-stage model that enables end-to-end joint optimization. 
The main idea is as straightforward as fusing a text query's embedding into the YOLOv3 object detector, augmented by spatial features so as to account for spatial mentions in the query. Despite being simple, this one-stage approach shows great potential in terms of both accuracy and speed for both phrase localization and referring expression comprehension, according to our experiments. Given these results along with careful investigations into some popular region proposals, we advocate for visual grounding a paradigm shift from the conventional two-stage methods to the one-stage framework.) <|cite_end|>on grounding complex queries.
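To make the correlation-filtering formulation above concrete, here is a toy NumPy sketch: the query is assumed to have already been mapped to a small kernel, the kernel is correlated with the visual feature map, and the heatmap peak is read out as the object centre. Shapes and the brute-force loop are purely illustrative, and the full method additionally regresses the object size and a centre offset to form the box.
\begin{verbatim}
import numpy as np

def ground_by_correlation(image_feat, text_kernel):
    """Correlate a language-derived kernel with a visual feature map and
    return the heatmap peak as the centre of the referred object.

    image_feat:  (H, W, C) visual feature map
    text_kernel: (h, w, C) kernel produced from the query embedding
    """
    H, W, C = image_feat.shape
    h, w, _ = text_kernel.shape
    heatmap = np.empty((H - h + 1, W - w + 1))
    for y in range(H - h + 1):            # naive sliding-window correlation
        for x in range(W - w + 1):
            patch = image_feat[y:y + h, x:x + w, :]
            heatmap[y, x] = float(np.sum(patch * text_kernel))
    cy, cx = np.unravel_index(np.argmax(heatmap), heatmap.shape)
    return (cy + h // 2, cx + w // 2), heatmap

centre, _ = ground_by_correlation(np.random.rand(32, 32, 64),
                                  np.random.rand(3, 3, 64))
print(centre)
\end{verbatim}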
\begin{figure}[t]
\centering {\includegraphics[width=0.48\textwidth]{figs/framework_v3.pdf}}
\caption{An overview of our proposed TransVG framework. It consists of four main components: (1) a visual branch, (2) a linguistic branch, (3) a visual-linguistic fusion module, and (4) a prediction head to regress the box coordinates.}
\label{fig:framework}
\end{figure}
\subsection{Transformer}
Transformer is first proposed in <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>to tackle the neural machine translation (NMT). The primary component of a transformer layer is the attention module, which scans through the input sequence in parallel and aggregates the information of the whole sequence with adaptive weights. Compared to the recurrent units in RNNs <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|> <|cite_start|> (Reference: Recurrent neural network based language model: A new recurrent neural network based language model (RNN LM) with applications to speech recognition is presented. Results indicate that it is possible to obtain around 50% reduction of perplexity by using mixture of several RNN LMs, compared to a state of the art backoff language model. 
Speech recognition experiments show around 18% reduction of word error rate on the Wall Street Journal task when comparing models trained on the same amount of data, and around 5% on the much harder NIST RT05 task, even when the backoff model is trained on much more data than the RNN LM. We provide ample empirical evidence to suggest that connectionist language models are superior to standard n-gram techniques, except their high computational (training) complexity. Index Terms: language modeling, recurrent neural networks, speech recognition) <|cite_end|> <|cite_start|> (Reference: Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks: Because of their superior ability to preserve sequence information over time, Long Short-Term Memory (LSTM) networks, a type of recurrent neural network with a more complex computational unit, have obtained strong results on a variety of sequence modeling tasks. The only underlying LSTM structure that has been explored so far is a linear chain. However, natural language exhibits syntactic properties that would naturally combine words to phrases. We introduce the Tree-LSTM, a generalization of LSTMs to tree-structured network topologies. Tree-LSTMs outperform all existing systems and strong LSTM baselines on two tasks: predicting the semantic relatedness of two sentences (SemEval 2014, Task 1) and sentiment classification (Stanford Sentiment Treebank).) <|cite_end|>, the attention mechanism exhibits better performance in processing long sequences. This superiority has attracted a surge of research interest in applications of transformers in NLP tasks <|cite_start|> (Reference: Universal Transformers: Recurrent neural networks (RNNs) sequentially process data by updating their state with each new data point, and have long been the de facto choice for sequence modeling tasks. However, their inherently sequential computation makes them slow to train. Feed-forward and convolutional architectures have recently been shown to achieve superior results on some sequence modeling tasks such as machine translation, with the added advantage that they concurrently process all inputs in the sequence, leading to easy parallelization and faster training times. Despite these successes, however, popular feed-forward sequence models like the Transformer fail to generalize in many simple tasks that recurrent models handle with ease, e.g. copying strings or even simple logical inference when the string or formula lengths exceed those observed at training time. We propose the Universal Transformer (UT), a parallel-in-time self-attentive recurrent sequence model which can be cast as a generalization of the Transformer model and which addresses these issues. UTs combine the parallelizability and global receptive field of feed-forward sequence models like the Transformer with the recurrent inductive bias of RNNs. We also add a dynamic per-position halting mechanism and find that it improves accuracy on several tasks. In contrast to the standard Transformer, under certain assumptions, UTs can be shown to be Turing-complete. Our experiments show that UTs outperform standard Transformers on a wide range of algorithmic and language understanding tasks, including the challenging LAMBADA language modeling task where UTs achieve a new state of the art, and machine translation where UTs achieve a 0.9 BLEU improvement over Transformers on the WMT14 En-De dataset.) 
<|cite_end|> <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|> <|cite_start|> (Reference: Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer: Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, we explore the landscape of transfer learning techniques for NLP by introducing a unified framework that converts all text-based language problems into a text-to-text format. Our systematic study compares pre-training objectives, architectures, unlabeled data sets, transfer approaches, and other factors on dozens of language understanding tasks. By combining the insights from our exploration with scale and our new ``Colossal Clean Crawled Corpus'', we achieve state-of-the-art results on many benchmarks covering summarization, question answering, text classification, and more. To facilitate future work on transfer learning for NLP, we release our data set, pre-trained models, and code.) <|cite_end|> <|cite_start|> (Reference: Incorporating BERT into Neural Machine Translation: The recently proposed BERT has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks enough exploration. While BERT is more commonly used as fine-tuning instead of contextual embedding for downstream language understanding tasks, in NMT, our preliminary exploration of using BERT as contextual embedding is better than using for fine-tuning. This motivates us to think how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at \url{https://github.com/bert-nmt/bert-nmt}.) 
<|cite_end|>and speech recognition <|cite_start|> (Reference: Streaming automatic speech recognition with the transformer model: Encoder-decoder based sequence-to-sequence models have demonstrated state-of-the-art results in end-to-end automatic speech recognition (ASR). Recently, the transformer architecture, which uses self-attention to model temporal context information, has been shown to achieve significantly lower word error rates (WERs) compared to recurrent neural network (RNN) based system architectures. Despite its success, the practical usage is limited to offline ASR tasks, since encoder-decoder architectures typically require an entire speech utterance as input. In this work, we propose a transformer based end-to-end ASR system for streaming ASR, where an output must be generated shortly after each spoken word. To achieve this, we apply time-restricted self-attention for the encoder and triggered attention for the encoder-decoder attention mechanism. Our proposed streaming transformer architecture achieves 2.8% and 7.2% WER for the "clean" and "other" test data of LibriSpeech, which to our knowledge is the best published streaming end-to-end ASR result for this task.) <|cite_end|> <|cite_start|> (Reference: Transformer-based Acoustic Modeling for Hybrid Speech Recognition: We propose and evaluate transformer-based acoustic models (AMs) for hybrid speech recognition. Several modeling choices are discussed in this work, including various positional embedding methods and an iterated loss to enable training deep transformers. We also present a preliminary study of using limited right context in transformer models, which makes it possible for streaming applications. We demonstrate that on the widely used Librispeech benchmark, our transformer-based AM outperforms the best published hybrid result by 19% to 26% relative when the standard n-gram language model (LM) is used. Combined with neural network LM for rescoring, our proposed approach achieves state-of-the-art results on Librispeech. Our findings are also confirmed on a much larger internal dataset.) <|cite_end|>.
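To make the attention mechanism concrete, we recall its standard scaled dot-product form: given query, key, and value matrices $Q$, $K$, and $V$ obtained from linear projections of the input sequence, and key dimension $d_k$, the attention output is
\begin{equation*}
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V,
\end{equation*}
so that every output position is a weighted sum over all input positions, with the adaptive weights computed in parallel rather than sequentially as in recurrent models.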
\textbf{Transformer in Vision Tasks.} Inspired by the great success of transformers in neural machine translation, a series of transformers <|cite_start|> (Reference: End-to-End Object Detection with Transformers: We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at https://github.com/facebookresearch/detr.) <|cite_end|> <|cite_start|> (Reference: Pre-Trained Image Processing Transformer: As the computing power of modern hardware is increasing strongly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. The big progress is mainly contributed to the representation ability of transformer and its variant architectures. In this paper, we study the low-level computer vision task (e.g., denoising, super-resolution and deraining) and develop a new pre-trained model, namely, image processing transformer (IPT). To maximally excavate the capability of transformer, we present to utilize the well-known ImageNet benchmark for generating a large amount of corrupted image pairs. The IPT model is trained on these images with multi-heads and multi-tails. In addition, the contrastive learning is introduced for well adapting to different image processing tasks. The pre-trained model can therefore efficiently employed on desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks. Code is available at https://github.com/huawei-noah/Pretrained-IPT and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/IPT) <|cite_end|> <|cite_start|> (Reference: {Generative Pretraining From Pixels: Inspired by progress in unsupervised representation learning for natural language, we examine whether similar models can learn useful representations for images. We train a sequence Transformer to auto-regressively predict pixels, without incorporating knowledge of the 2D input structure. Despite training on low-resolution ImageNet without labels, we find that a GPT-2 scale model learns strong image representations as measured by linear probing, fine-tuning, and low-data classification. On CIFAR-10, we achieve 96.3% accuracy with a linear probe, outperforming a supervised Wide ResNet, and 99.0% accuracy with full fine-tuning, matching the top supervised pre-trained models. 
An even larger model trained on a mix-ture of ImageNet and web images is competitive with self-supervised benchmarks on ImageNet, achieving 72.0% top-1 accuracy on a linear probe of our features.) <|cite_end|> <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|> <|cite_start|> (Reference: Hand-Transformer: Non-Autoregressive Structured Modeling for 3D Hand Pose Estimation: ) <|cite_end|> <|cite_start|> (Reference: Learning Texture Transformer Network for Image Super-Resolution: We study on image super-resolution (SR), which aims to recover realistic textures from a low-resolution (LR) image. Recent progress has been made by taking high-resolution images as references (Ref), so that relevant textures can be transferred to LR images. However, existing SR approaches neglect to use attention mechanisms to transfer high-resolution (HR) textures from Ref images, which limits these approaches in challenging cases. In this paper, we propose a novel Texture Transformer Network for Image Super-Resolution (TTSR), in which the LR and Ref images are formulated as queries and keys in a transformer, respectively. TTSR consists of four closely-related modules optimized for image generation tasks, including a learnable texture extractor by DNN, a relevance embedding module, a hard-attention module for texture transfer, and a soft-attention module for texture synthesis. Such a design encourages joint feature learning across LR and Ref images, in which deep feature correspondences can be discovered by attention, and thus accurate texture features can be transferred. The proposed texture transformer can be further stacked in a cross-scale way, which enables texture recovery from different levels (e.g., from 1x to 4x magnification). Extensive experiments show that TTSR achieves significant improvements over state-of-the-art approaches on both quantitative and qualitative evaluations.) <|cite_end|> <|cite_start|> (Reference: Learning Joint Spatial-Temporal Transformations for Video Inpainting: High-quality video inpainting that completes missing regions in video frames is a promising yet challenging task. State-of-the-art approaches adopt attention models to complete a frame by searching missing contents from reference frames, and further complete whole videos frame by frame. However, these approaches can suffer from inconsistent attention results along spatial and temporal dimensions, which often leads to blurriness and temporal artifacts in videos. In this paper, we propose to learn a joint Spatial-Temporal Transformer Network (STTN) for video inpainting. 
Specifically, we simultaneously fill missing regions in all input frames by self-attention, and propose to optimize STTN by a spatial-temporal adversarial loss. To show the superiority of the proposed model, we conduct both quantitative and qualitative evaluations by using standard stationary masks and more realistic moving object masks. Demo videos are available at https://github.com/researchmm/STTN.) <|cite_end|> <|cite_start|> (Reference: Deformable DETR: Deformable Transformers for End-to-End Object Detection: DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To mitigate these issues, we proposed Deformable DETR, whose attention modules only attend to a small set of key sampling points around a reference. Deformable DETR can achieve better performance than DETR (especially on small objects) with 10 times less training epochs. Extensive experiments on the COCO benchmark demonstrate the effectiveness of our approach. Code is released at https://github.com/fundamentalvision/Deformable-DETR.) <|cite_end|>applied to vision tasks have been proposed. The infusive work DETR <|cite_start|> (Reference: End-to-End Object Detection with Transformers: We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines. Training code and pretrained models are available at https://github.com/facebookresearch/detr.) <|cite_end|>formulates object detection as a set prediction problem. It introduces a small set of learnable object queries, reasons global context and object relations with attention mechanism, and outputs the final set of predictions in parallel. ViT <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. 
We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|>shows that a pure transformer can achieve excellent performance on image classification tasks. More recently, a pre-trained image processing transformer (IPT) is introduced in <|cite_start|> (Reference: Pre-Trained Image Processing Transformer: As the computing power of modern hardware is increasing strongly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. The big progress is mainly contributed to the representation ability of transformer and its variant architectures. In this paper, we study the low-level computer vision task (e.g., denoising, super-resolution and deraining) and develop a new pre-trained model, namely, image processing transformer (IPT). To maximally excavate the capability of transformer, we present to utilize the well-known ImageNet benchmark for generating a large amount of corrupted image pairs. The IPT model is trained on these images with multi-heads and multi-tails. In addition, the contrastive learning is introduced for well adapting to different image processing tasks. The pre-trained model can therefore efficiently employed on desired task after fine-tuning. With only one pre-trained model, IPT outperforms the current state-of-the-art methods on various low-level benchmarks. Code is available at https://github.com/huawei-noah/Pretrained-IPT and https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/IPT) <|cite_end|>to address the low-level vision problems, \emph{e.g.}, denoising, super-resolution and deraining.
\textbf{Transformer in Vision-Language Tasks.} Motivated by the powerful pre-trained model of BERT <|cite_start|> (Reference: BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding: We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5% (7.7% point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).) <|cite_end|>, some researchers start to investigate visual-linguistic pre-training (VLP) <|cite_start|> (Reference: UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER.) <|cite_end|> <|cite_start|> (Reference: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. 
While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.) <|cite_end|> <|cite_start|> (Reference: ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks: We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, pro-cessing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability.) <|cite_end|> <|cite_start|> (Reference: VL-BERT: Pre-training of Generic Visual-Linguistic Representations: We introduce a new pre-trainable generic representation for visual-linguistic tasks, called Visual-Linguistic BERT (VL-BERT for short). VL-BERT adopts the simple yet powerful Transformer model as the backbone, and extends it to take both visual and linguistic embedded features as input. In it, each element of the input is either of a word from the input sentence, or a region-of-interest (RoI) from the input image. It is designed to fit for most of the visual-linguistic downstream tasks. To better exploit the generic representation, we pre-train VL-BERT on the massive-scale Conceptual Captions dataset, together with text-only corpus. Extensive empirical analysis demonstrates that the pre-training procedure can better align the visual-linguistic clues and benefit the downstream tasks, such as visual commonsense reasoning, visual question answering and referring expression comprehension. It is worth noting that VL-BERT achieved the first place of single model on the leaderboard of the VCR benchmark. Code is released at \url{https://github.com/jackroos/VL-BERT}.) <|cite_end|> <|cite_start|> (Reference: TAP: Text-Aware Pre-training for Text-VQA and Text-Caption: In this paper, we propose Text-Aware Pre-training (TAP) for Text-VQA and Text-Caption tasks. These two tasks aim at reading and understanding scene text in images for question answering and image caption generation, respectively. 
In contrast to the conventional vision-language pre-training that fails to capture scene text and its relationship with the visual and text modalities, TAP explicitly incorporates scene text (generated from OCR engines) in pre-training. With three pre-training tasks, including masked language modeling (MLM), image-text (contrastive) matching (ITM), and relative (spatial) position prediction (RPP), TAP effectively helps the model learn a better aligned representation among the three modalities: text word, visual object, and scene text. Due to this aligned representation learning, even pre-trained on the same downstream task dataset, TAP already boosts the absolute accuracy on the TextVQA dataset by +5.4%, compared with a non-TAP baseline. To further improve the performance, we build a large-scale dataset based on the Conceptual Caption dataset, named OCR-CC, which contains 1.4 million scene text-related image-text pairs. Pre-trained on this OCR-CC dataset, our approach outperforms the state of the art by large margins on multiple tasks, i.e., +8.3% accuracy on TextVQA, +8.6% accuracy on ST-VQA, and +10.2 CIDEr score on TextCaps.) <|cite_end|>to jointly represent images and texts. In general, these models take the object proposals and text as inputs, and devise several transformer encoder layers for joint representation learning. Plenty of pre-training tasks are introduced, including image-text matching (ITM), word-region alignment (WRA), masked language modeling (MLM), masked region modeling (MRM), \emph{etc.}
Despite sharing similar base units (\ie transformer encoder layers), the goal of VLP is to learn a generalizable vision-language representation from large-scale data to facilitate downstream tasks. In contrast, we focus on developing a novel transformer-based visual grounding framework, and on learning to perform homogeneous multi-modal fusion and reasoning with a small amount of visual grounding data. <|paper_end|>
"<|reference_start|> Incorporating BERT into Neural Machine Translation: The recently proposed BERT has shown great power on a variety of natural language understanding tasks, such as text classification, reading comprehension, etc. However, how to effectively apply BERT to neural machine translation (NMT) lacks enough exploration. While BERT is more commonly used as fine-tuning instead of contextual embedding for downstream language understanding tasks, in NMT, our preliminary exploration of using BERT as contextual embedding is better than using for fine-tuning. This motivates us to think how to better leverage BERT for NMT along this direction. We propose a new algorithm named BERT-fused model, in which we first use BERT to extract representations for an input sequence, and then the representations are fused with each layer of the encoder and decoder of the NMT model through attention mechanisms. We conduct experiments on supervised (including sentence-level and document-level translations), semi-supervised and unsupervised machine translation, and achieve state-of-the-art results on seven benchmark datasets. Our code is available at \\url{https://github.com/bert-nmt/bert-nmt}. <|reference_end|>",
"<|reference_start|> UNITER: UNiversal Image-TExt Representation Learning: Joint image-text embedding is the bedrock for most Vision-and-Language (V+L) tasks, where multimodality inputs are simultaneously processed for joint visual and textual understanding. In this paper, we introduce UNITER, a UNiversal Image-TExt Representation, learned through large-scale pre-training over four image-text datasets (COCO, Visual Genome, Conceptual Captions, and SBU Captions), which can power heterogeneous downstream V+L tasks with joint multimodal embeddings. We design four pre-training tasks: Masked Language Modeling (MLM), Masked Region Modeling (MRM, with three variants), Image-Text Matching (ITM), and Word-Region Alignment (WRA). Different from previous work that applies joint random masking to both modalities, we use conditional masking on pre-training tasks (i.e., masked language/region modeling is conditioned on full observation of image/text). In addition to ITM for global image-text alignment, we also propose WRA via the use of Optimal Transport (OT) to explicitly encourage fine-grained alignment between words and image regions during pre-training. Comprehensive analysis shows that both conditional masking and OT-based WRA contribute to better pre-training. We also conduct a thorough ablation study to find an optimal combination of pre-training tasks. Extensive experiments show that UNITER achieves new state of the art across six V+L tasks (over nine datasets), including Visual Question Answering, Image-Text Retrieval, Referring Expression Comprehension, Visual Commonsense Reasoning, Visual Entailment, and NLVR$^2$. Code is available at https://github.com/ChenRocks/UNITER. <|reference_end|>",
"<|reference_start|> Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks. <|reference_end|>",
"<|reference_start|> ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks: We present ViLBERT (short for Vision-and-Language BERT), a model for learning task-agnostic joint representations of image content and natural language. We extend the popular BERT architecture to a multi-modal two-stream model, pro-cessing both visual and textual inputs in separate streams that interact through co-attentional transformer layers. We pretrain our model through two proxy tasks on the large, automatically collected Conceptual Captions dataset and then transfer it to multiple established vision-and-language tasks -- visual question answering, visual commonsense reasoning, referring expressions, and caption-based image retrieval -- by making only minor additions to the base architecture. We observe significant improvements across tasks compared to existing task-specific models -- achieving state-of-the-art on all four tasks. Our work represents a shift away from learning groundings between vision and language only as part of task training and towards treating visual grounding as a pretrainable and transferable capability. <|reference_end|>"
] | [
13,
28,
29,
30
] | {"<|multi_cite_1_1|>": "arxiv-86728", "<|multi_cite_1_2|>": "arxiv-103126", "<|multi_cite_2_1|>": "ss-1301953", "<|multi_cite_2_2|>": "arxiv-77941", "<|multi_cite_3_1|>": "arxiv-87106", "<|multi_cite_3_2|>": "ss-1295339", "<|multi_cite_4_1|>": "arxiv-86728", "<|multi_cite_4_2|>": "arxiv-103173", "<|multi_cite_4_3|>": "arxiv-121451", "<|multi_cite_4_4|>": "arxiv-103126", "<|multi_cite_5_1|>": "arxiv-183830", "<|multi_cite_5_2|>": "arxiv-223884", "<|multi_cite_5_3|>": "arxiv-219164", "<|cite_6|>": "arxiv-121451", "<|cite_7|>": "arxiv-219164", "<|cite_8|>": "arxiv-146300", "<|multi_cite_9_1|>": "arxiv-184200", "<|multi_cite_9_2|>": "arxiv-224287", "<|multi_cite_9_3|>": "arxiv-260178", "<|cite_10|>": "arxiv-183785", "<|cite_11|>": "arxiv-282431", "<|multi_cite_12_1|>": "arxiv-103126", "<|multi_cite_12_2|>": "arxiv-86728", "<|multi_cite_12_3|>": "arxiv-121451", "<|cite_13|>": "arxiv-219164", "<|cite_14|>": "ss-1301953", "<|cite_15|>": "arxiv-77941", "<|cite_16|>": "arxiv-103126", "<|cite_17|>": "arxiv-103126", "<|cite_18|>": "arxiv-86728", "<|multi_cite_19_1|>": "arxiv-208037", "<|multi_cite_19_2|>": "arxiv-111392", "<|multi_cite_19_3|>": "arxiv-183785", "<|multi_cite_19_4|>": "arxiv-121451", "<|multi_cite_19_5|>": "arxiv-184200", "<|multi_cite_19_6|>": "arxiv-224287", "<|multi_cite_19_7|>": "arxiv-146300", "<|multi_cite_19_8|>": "arxiv-142358", "<|multi_cite_19_9|>": "arxiv-140380", "<|multi_cite_20_1|>": "arxiv-183830", "<|multi_cite_20_2|>": "arxiv-223884", "<|multi_cite_20_3|>": "arxiv-219473", "<|multi_cite_20_4|>": "arxiv-282431", "<|multi_cite_20_5|>": "arxiv-219164", "<|multi_cite_21_1|>": "arxiv-140970", "<|multi_cite_21_2|>": "arxiv-121451", "<|multi_cite_22_1|>": "arxiv-146300", "<|multi_cite_22_2|>": "arxiv-142358", "<|multi_cite_23_1|>": "arxiv-121451", "<|multi_cite_23_2|>": "arxiv-121564", "<|multi_cite_24_1|>": "arxiv-86728", "<|multi_cite_24_2|>": "arxiv-103173", "<|multi_cite_24_3|>": "arxiv-87542", "<|multi_cite_25_1|>": "arxiv-86728", "<|multi_cite_25_2|>": "arxiv-87542", "<|multi_cite_25_3|>": "arxiv-103126", "<|cite_26|>": "arxiv-146300", "<|multi_cite_27_1|>": "arxiv-183785", "<|multi_cite_27_2|>": "arxiv-184200", "<|multi_cite_27_3|>": "arxiv-224287", "<|cite_28|>": "arxiv-193730", "<|multi_cite_29_1|>": "ss-1281314", "<|multi_cite_29_2|>": "arxiv-131233", "<|multi_cite_29_3|>": "arxiv-195849", "<|cite_30|>": "arxiv-219164", "<|cite_31|>": "arxiv-154187", "<|cite_32|>": "arxiv-223884", "<|multi_cite_33_1|>": "ss-771389", "<|multi_cite_33_2|>": "arxiv-60186", "<|cite_34|>": "arxiv-282431", "<|cite_35|>": "arxiv-219164", "<|cite_36|>": "arxiv-126595", "<|multi_cite_37_1|>": "ss-710343", "<|multi_cite_37_2|>": "ss-1931808", "<|multi_cite_37_3|>": "arxiv-73807", "<|multi_cite_38_1|>": "arxiv-165462", "<|multi_cite_38_2|>": "arxiv-175879", "<|multi_cite_38_3|>": "arxiv-230407", "<|multi_cite_38_4|>": "arxiv-248635", "<|multi_cite_39_1|>": "arxiv-242529", "<|multi_cite_39_2|>": "arxiv-230081", "<|multi_cite_40_1|>": "arxiv-267742", "<|multi_cite_40_2|>": "arxiv-306906", "<|multi_cite_40_3|>": "ss-738929", "<|multi_cite_40_4|>": "arxiv-298443", "<|multi_cite_40_5|>": "ss-706498", "<|multi_cite_40_6|>": "arxiv-270036", "<|multi_cite_40_7|>": "arxiv-279732", "<|multi_cite_40_8|>": "arxiv-294884", "<|cite_41|>": "arxiv-267742", "<|cite_42|>": "arxiv-298443", "<|cite_43|>": "arxiv-306906", "<|cite_44|>": "arxiv-175879", "<|multi_cite_45_1|>": "arxiv-225610", "<|multi_cite_45_2|>": "arxiv-259146", "<|multi_cite_45_3|>": "arxiv-217761", "<|multi_cite_45_4|>": "arxiv-219957", 
"<|multi_cite_45_5|>": "arxiv-308707"} |
2112.12296 | <|paper_start|> Title: Sub-Chain Beam for mmWave Devices: A Trade-off between Power Saving and Beam Correspondence
Abstract: Sub-Chain Beam for mmWave Devices: A Trade-off between Power Saving and Beam Correspondence: Beam correspondence, or downlink-uplink (DL-UL) beam reciprocity, refers to the assumption that the best beams in the DL are also the best beams in the UL. This is an important assumption that allows the existing beam management framework in 5G to rely heavily on DL beam sweeping and avoid UL beam sweeping: UL beams are inferred from the measurements of the DL reference signals. Beam correspondence holds when the radio configurations are symmetric in the DL and UL. However, as mmWave technology matures, the DL and the UL face different constraints, often breaking the beam correspondence. For example, power constraints may require a UE to activate only a portion of its antenna array for UL transmission, while still activating the full array for DL reception. Meanwhile, if the UL beam formed with a sub-array, referred to as a sub-chain beam in this paper, has a radiation pattern similar to that of the DL beam, the beam correspondence can still hold. This paper proposes methods for sub-chain beam codebook design to achieve a trade-off between power saving and beam correspondence.
Introduction
In the millimeter wave band, antenna arrays are usually adopted by the UE to generate higher-gain beams than a single antenna, thus resulting in higher SNR and throughput. For example, <|cite_start|> (Reference: Beam Codebook Design for 5G mmWave Terminals: A beam codebook of 5G millimeter wave (mmWave) for data communication consists of multiple high-peak-gain beams to compensate the high pathloss at the mmWave bands. These beams also have to point to different angular directions, such that by performing beam searching over the codebook, a good mmWave signal coverage over the full sphere around the terminal (spherical coverage) can be achieved. A model-based beam codebook design that assumes ideal omni-directional antenna pattern, and neglects the impact of terminal housing around the antenna, does not work well because the radiation pattern of a practical mmWave antenna combined with the impact of terminal housing is highly irregular. In this paper, we propose a novel and efficient data-driven method to generate a beam codebook to boost the spherical coverage of mmWave terminals. The method takes as inputs the measured or simulated electric field response data of each antenna and provides the codebook according to the requirements on the codebook size, spherical coverage, and so on. The method can be applied in a straightforward manner to different antenna type, antenna array configuration, placement, and terminal housing design. Our simulation results show that the proposed method generates a codebook better than the benchmark and 802.15.3c codebooks in terms of the spherical coverage.) <|cite_end|> <|cite_start|> (Reference: Antenna Placement and Performance Tradeoffs with Hand Blockage in Millimeter Wave Systems: The ongoing commercial deployment of millimeter wave systems brings into focus a number of practical issues in form factor user equipment (UE) design. With wavelengths becoming smaller, antenna gain patterns becoming directional, and link budgets critically dependent on beamforming, it becomes imperative to use a number of antenna modules at different locations of the UE for good performance. While more antennas/modules can enhance beamforming array gains, it comes with the tradeoff of higher component cost, power consumption of the associated radio frequency circuitry, and a beam management overhead in learning the appropriate beam weights. Thus, the goal of a good UE design is to provide robust spherical coverage corresponding to good array gains over the entire sphere around the UE with a low beam management overhead, complexity, and cost. The scope of this paper is to study the implications of two popular commercial millimeter wave UE designs (a face and an edge design) on spherical coverage. We show that analog beam codebooks can result in good performance for both the designs, and the edge design provides a better tradeoff in terms of robust performance (with hand blockage), beam management overhead, implementation complexity from an antenna placement standpoint and cost.) <|cite_end|> <|cite_start|> (Reference: Hand Grip Impact on 5G mmWave Mobile Devices: This paper contributes a comprehensive study on the effect of the user hand grip on the design of 5G millimeter-wave (mmWave) mobile handsets, specifically in terms of the antenna module placement and the beamforming codebook.
The high-frequency structure simulator (HFSS) is used to characterize the radiation fields for different antenna placements and 14 possible handgrip profiles based on the experiments that we conducted. The loss from hand blockage on the antenna gains can be up to 20–25 dB, which implies that the possible hand grip profiles need to be taken into account while designing the antenna module placement and beamforming codebook. Specifically, we consider three different codebook adaption schemes: a grip-aware scheme, where perfect knowledge of the hand grip is available; a semi-aware scheme, where just the application (voice call, messaging, and so on) and the orientation of the mobile handset is known; and a grip-agnostic scheme, where the codebook ignores the hand blockage. Our results show that the ideal grip-aware scheme can provide more than 50% gain in terms of the spherical coverage over the agnostic scheme, depending on the grip and orientation. Encouragingly, the more practical semi-aware scheme that we propose provides performance approaching the fully grip-aware scheme. Overall, we demonstrate that the 5G mmWave handsets are different from pre-5G handsets: the user grip needs to be explicitly factored into the antenna placement and the codebook design.) <|cite_end|> <|cite_start|> (Reference: Spherical Coverage Characterization of 5G Millimeter Wave User Equipment With 3GPP Specifications: Millimeter-wave (mmWave) frequency bands are promising candidate spectrum for the fifth-generation (5G) mobile communication system, but it requires high directional antenna systems to be applied to the base station and the user equipment (UE) for compensating the high path loss. Due to the randomness of mobile wireless channels, antenna systems in a mobile UE must own a large spherical coverage, which raises new challenges for the performance characterization of 5G mmWave UEs. In the latest specification of the Third-Generation Partnership Project (3GPP), the requirement on UE’s spherical coverage in mmWave frequencies is defined, which is evaluated with the cumulative distribution function of the effective isotropic radiated power. In this paper, the spherical coverage of mmWave UEs is characterized based on the specification of 3GPP, where the impact of device integration, antenna topologies, and user body blockage on the spherical coverage of UE will be analyzed with simulation and measurement results.) <|cite_end|> presented possible antenna array setups for mmWave 5G phones, where $2 \times 1$, $4 \times 1$ or $2 \times 2$ arrays are adopted.
One of the key challenges for 5G and beyond is UE power consumption <|cite_start|> (Reference: Six Key Challenges for Beam Management in 5.5G and 6G Systems: Future cellular networks will increasingly rely on the millimeter-wave bands to increase capacity. Migrating to ever higher carrier frequencies will require increasingly directional beamforming to establish and maintain the link. Intelligent beam management (BM) protocols will be critical for establishing and maintaining connections between the base station and user equipment in a dynamic channel. This article first provides a brief overview of the BM protocol in Release 15 of 5G New Radio, then identifies six major challenges to BM for later 5G releases that will likely persist into 6G. We describe the trends and issues behind each of the six challenges, and provide recommendations and suggested research directions to address them.) <|cite_end|>. The battery life and temperature control issues are aggravated in the mmWave bands compared to the sub-6 GHz band.
When the phone heats up quickly, one straightforward solution is to fall back to the sub-6 GHz band and turn off the mmWave array.
However, LTE fallback is generally undesirable. First, the maximum data rate decreases from Gbps to a few hundred Mbps or less. Second, frequently turning the mmWave antenna module off and on incurs additional latency, power consumption, and even service disruption.
Instead of falling back to LTE, an alternative solution is to reduce the number of activated antenna elements.
A beam that activates only part of the array is called a `sub-chain beam' in this paper, since the antennas on the same array are connected to the same RF chain. Correspondingly, a beam that activates the whole array is called a `full-chain beam'. Note that the deactivation approach has been used to create wide beams <|cite_start|> (Reference: Suboptimal Beam Search Algorithm and Codebook Design for Millimeter-Wave Communications: ) <|cite_end|> <|cite_start|> (Reference: Hierarchical Codebook Design for Beamforming Training in Millimeter-Wave Communication: In millimeter-wave communication, large antenna arrays are required to achieve high power gain by steering towards each other with narrow beams, which poses the problem to efficiently search the best beam direction in the angle domain at both Tx and Rx sides. As the exhaustive search is time consuming, hierarchical search has been widely accepted to reduce the complexity, and its performance is highly dependent on the codebook design. In this paper, we propose two basic criteria for the hierarchical codebook design, and devise an efficient hierarchical codebook by jointly exploiting sub-array and deactivation (turning-off) antenna processing techniques, where closed-form expressions are provided to generate the codebook. Performance evaluations are conducted under different system and channel models. Results show superiority of the proposed codebook over the existing alternatives.) <|cite_end|> in hierarchical codebook design, but in this paper, it is utilized to save power and prevent overheating, instead of broadening the beamwidth.
An example of the sub-chain beam operation is shown in \figref{fig:DL_UL}, where the UE activates only a portion of its antenna array for UL transmission, while still activating the full array for DL reception. This operation scheme is chosen since 1) transmission consumes much more power than reception, and 2) the downlink data rate requirement is usually higher than that of the uplink.
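As a simple illustration of this operation (the notation below is introduced only for exposition), a sub-chain beamforming vector is a vector $\mathbf{w}_{\mathrm{sub}} \in \mathbb{C}^{N}$ over the $N$-element array whose entries are nonzero only on the activated antennas, i.e.,
\begin{equation*}
\|\mathbf{w}_{\mathrm{sub}}\|_0 = N_{\mathrm{s}} < N,
\end{equation*}
where $\|\cdot\|_0$ counts the nonzero entries and $N_{\mathrm{s}}$ is the number of antennas (and hence PAs) that remain active, e.g., $N_{\mathrm{s}} = 3$ in \figref{fig:UL_3_chain}; a full-chain beamforming vector, by contrast, has all $N$ entries nonzero.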
\begin{figure}[t]
\centering
\subfigure[Downlink: UE activates all antennas for reception]{
\includegraphics[width= 0.9\linewidth]{Figs/DL_5_chain_beams.png}
\label{fig:DL_5_chain}}
\subfigure[Uplink: UE activates 3 antennas for transmission]{
\includegraphics[width= 0.9\linewidth]{Figs/UL_3_chain_beams.png}
\label{fig:UL_3_chain}}
\caption{The UE receives with a full-chain beam to maximize the beam gain and transmits with a sub-chain beam to save power and/or control the temperature. Dashed curves stand for Rx beams and solid curves stand for Tx beams.}
\label{fig:DL_UL}
\end{figure}
The 5G standard has defined the process of identifying and maintaining a suitable beam pair for the BS-UE link, which is known as beam management (BM) <|cite_start|> (Reference: Modular and High-Resolution Channel State Information and Beam Management for 5G New Radio: This article provides an overview of key features pertaining to CSI reporting and beam management for the 5G New Radio (NR) currently being standardized in 3GPP. For CSI reporting, the modular design framework and high-resolution spatial information feedback offer not only flexibility in a host of use cases and deployment scenarios, but also improved average user throughput over state-of-the-art 4G LTE. To accommodate cellular communications in the milimeter-wave regime where a combination of analog and digital beamforming is typically used at both a base station and user equipment, beam management procedures such as measurement, reporting, and recovery are introduced. The utility and joint usage of these two features are demonstrated along with some potential upgrades for the next phase of 5G NR. Introduction) <|cite_end|> <|cite_start|> (Reference: A Tutorial on Beam Management for 3GPP NR at mmWave Frequencies: The millimeter wave (mmWave) frequencies offer the availability of huge bandwidths to provide unprecedented data rates to next-generation cellular mobile terminals. However, mmWave links are highly susceptible to rapid channel variations and suffer from severe free-space pathloss and atmospheric absorption. To address these challenges, the base stations and the mobile terminals will use highly directional antennas to achieve sufficient link budget in wide area networks. The consequence is the need for precise alignment of the transmitter and the receiver beams, an operation which may increase the latency of establishing a link, and has important implications for control layer procedures, such as initial access, handover and beam tracking. This tutorial provides an overview of recently proposed measurement techniques for beam and mobility management in mmWave cellular networks, and gives insights into the design of accurate, reactive and robust control schemes suitable for a 3GPP NR (NR) cellular network. We will illustrate that the best strategy depends on the specific environment in which the nodes are deployed, and give guidelines to inform the optimal choice as a function of the system parameters.) <|cite_end|> <|cite_start|> (Reference: Beam Management in Millimeter-Wave Communications for 5G and Beyond: Massive MIMO is one of the promising techniques to improve spectral efficiency and network performance for reaching its targeted multi-gigabit throughput in 5G systems. For 5G New Radio (NR) systems, one of the key differences compared to 4G systems is the utilization of high frequency millimeter wave (mmWave) bands in addition to sub-6GHz bands. To keep the complexity and implementation cost low, hybrid analog-digital beam-forming with large-scale antenna array has become a common design approach to address the issue of higher propagation loss as well as to improve spectral efficiency in mmWave communication in 5G NR. The 5G NR standard is designed to adapt to different beam-forming architecture and deployment scenarios. This paper provides the overview on beam management procedure according to the current 5G standardization progress. 
We discuss some major challenges of millimeter-wave communications encountered in the current 5G NR standard and present some expected enhancements considered for the future beyond-5G standard.) <|cite_end|> <|cite_start|> (Reference: Orientation-Assisted Beam Management for Beyond 5G Systems: Finding the optimal transmit and receive beam pair for reliable communication can be challenging, especially in highly dynamic environments. Side-information from on-board sensors at the user equipment (UE) can be used to aid the beam management (BM) process. In this work, we use the orientation information coming from inertial measurement unit (IMU) for effective BM. Specifically, we use particle filter (PF) to fuse the reference signal received power (RSRP) information with orientation information. We perform extensive simulations using realistic ray-tracing channels, practical beam patterns, and various UE movement and rotation speeds. Simulation results show the proposed strategy can greatly improve the beam prediction accuracy and reduce the power loss caused by sub-optimal beam-selection.) <|cite_end|>.
The above operation scheme, however, could break the DL-UL beam correspondence in 5G BM, which refers to the assumption that the best beams in the downlink direction are also the best beams in the uplink direction.
DL-UL beam correspondence is an important design criterion, since a separate UL beam management procedure would be required if beam correspondence does not hold.
In this paper, we propose methods to design the sub-chain beam codebook so as to maximally maintain the beam correspondence. This means that if the full-chain Beam B3 in \figref{fig:DL_5_chain} is the best Rx beam in the downlink, the corresponding sub-chain Beam B3 in \figref{fig:UL_3_chain} should also be the best Tx beam.
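One illustrative way to formalize this design goal (the notation here is only for exposition) is through the selected beam indices. Let $\{\mathbf{w}_i\}_{i=1}^{K}$ denote the full-chain codebook used for DL reception and $\{\mathbf{w}^{\mathrm{sub}}_i\}_{i=1}^{K}$ the sub-chain codebook used for UL transmission, where the $i$-th sub-chain codeword is the UL counterpart of the $i$-th full-chain codeword. For a channel with dominant array response $\mathbf{h}$, beam correspondence is preserved when
\begin{equation*}
\arg\max_{i} \big| \mathbf{w}_i^{H} \mathbf{h} \big| = \arg\max_{i} \big| (\mathbf{w}^{\mathrm{sub}}_i)^{H} \mathbf{h} \big|,
\end{equation*}
i.e., the index of the best Rx beam in the DL also identifies the best Tx beam in the UL, so no separate UL beam sweep is needed.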
Another power-saving option is to scale down the transmission power level of all the antennas together. Although the total radiated power can be the same for this option and the sub-chain beam, the total power consumption is different. Activating a power amplifier (PA) requires a base power regardless of the output power level. The option of reducing the power level still needs to turn on all the PAs, which means more base power consumption. In contrast, if we turn off some PAs (sub-chain beam), we can save not only the transmitted power but also the base power.
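The difference can be made concrete with a simplified, illustrative power model. Suppose each active PA draws a fixed base power $P_{\mathrm{base}}$ in addition to the power spent on radiation, and let the total radiated power $P_{\mathrm{tx}}$ be kept the same in both schemes. Scaling down the per-antenna power while keeping all $N$ PAs on consumes roughly $N P_{\mathrm{base}} + P_{\mathrm{tx}}$, whereas a sub-chain beam with $N_{\mathrm{s}} < N$ active PAs consumes roughly $N_{\mathrm{s}} P_{\mathrm{base}} + P_{\mathrm{tx}}$, so the sub-chain beam additionally saves about
\begin{equation*}
\Delta P \approx (N - N_{\mathrm{s}})\, P_{\mathrm{base}}
\end{equation*}
of static power. In practice, PA efficiency also tends to degrade when the output power is backed off, which further favors turning off PAs over scaling down all of them.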
\textit{Notation:} A bold uppercase letter $\mathbf{A}$ and a bold lowercase letter $\mathbf{a}$ represent a matrix and a column vector, respectively.
$\left(\cdot \right)^T, \left(\cdot\right)^*, \left(\cdot\right)^H$ denote the transpose, conjugate, and Hermitian (conjugate transpose) of a vector or matrix, respectively. $\|\ba\|_0$ is the L0 norm of the vector $\ba$.
$[\ba]_{m:n}$ stands for the sub-vector of $\ba$ from the $m$-th entry to the $n$-th entry.
$\mathbbm{1}\{\cdot\}$ is the indicator function. <|paper_end|> | [
"<|reference_start|> Spherical Coverage Characterization of 5G Millimeter Wave User Equipment With 3GPP Specifications: Millimeter-wave (mmWave) frequency bands are promising candidate spectrum for the fifth-generation (5G) mobile communication system, but it requires high directional antenna systems to be applied to the base station and the user equipment (UE) for compensating the high path loss. Due to the randomness of mobile wireless channels, antenna systems in a mobile UE must own a large spherical coverage, which raises new challenges for the performance characterization of 5G mmWave UEs. In the latest specification of the Third-Generation Partnership Project (3GPP), the requirement on UE’s spherical coverage in mmWave frequencies is defined, which is evaluated with the cumulative distribution function of the effective isotropic radiated power. In this paper, the spherical coverage of mmWave UEs is characterized based on the specification of 3GPP, where the impact of device integration, antenna topologies, and user body blockage on the spherical coverage of UE will be analyzed with simulation and measurement results. <|reference_end|>",
"<|reference_start|> Suboptimal Beam Search Algorithm and Codebook Design for Millimeter-Wave Communications: <|reference_end|>",
"<|reference_start|> Hierarchical Codebook Design for Beamforming Training in Millimeter-Wave Communication: In millimeter-wave communication, large antenna arrays are required to achieve high power gain by steering towards each other with narrow beams, which poses the problem to efficiently search the best beam direction in the angle domain at both Tx and Rx sides. As the exhaustive search is time consuming, hierarchical search has been widely accepted to reduce the complexity, and its performance is highly dependent on the codebook design. In this paper, we propose two basic criteria for the hierarchical codebook design, and devise an efficient hierarchical codebook by jointly exploiting sub-array and deactivation (turning-off) antenna processing techniques, where closed-form expressions are provided to generate the codebook. Performance evaluations are conducted under different system and channel models. Results show superiority of the proposed codebook over the existing alternatives. <|reference_end|>",
"<|reference_start|> Modular and High-Resolution Channel State Information and Beam Management for 5G New Radio: This article provides an overview of key features pertaining to CSI reporting and beam management for the 5G New Radio (NR) currently being standardized in 3GPP. For CSI reporting, the modular design framework and high-resolution spatial information feedback offer not only flexibility in a host of use cases and deployment scenarios, but also improved average user throughput over state-of-the-art 4G LTE. To accommodate cellular communications in the milimeter-wave regime where a combination of analog and digital beamforming is typically used at both a base station and user equipment, beam management procedures such as measurement, reporting, and recovery are introduced. The utility and joint usage of these two features are demonstrated along with some potential upgrades for the next phase of 5G NR. Introduction <|reference_end|>"
] | [
3,
5,
6,
7
] | {"<|multi_cite_1_1|>": "ss-2168644", "<|multi_cite_1_2|>": "arxiv-186512", "<|multi_cite_1_3|>": "ss-1227766", "<|multi_cite_1_4|>": "ss-2168645", "<|cite_2|>": "ss-952746", "<|multi_cite_3_1|>": "ss-1563404", "<|multi_cite_3_2|>": "arxiv-86697", "<|multi_cite_4_1|>": "ss-2168646", "<|multi_cite_4_2|>": "ss-922872", "<|multi_cite_4_3|>": "ss-2168647", "<|multi_cite_4_4|>": "ss-2168648"} |
1307.1584 | <|paper_start|> Title: Comparing Data-mining Algorithms Developed for Longitudinal Observational Databases
Abstract: Comparing Data-mining Algorithms Developed for Longitudinal Observational Databases: Longitudinal observational databases have become a recent interest in the post marketing drug surveillance community due to their ability to present a new perspective for detecting negative side effects. Algorithms mining longitudinal observational databases are not restricted by many of the limitations associated with the more conventional methods that have been developed for spontaneous reporting system databases. In this paper we investigate the robustness of four recently developed algorithms that mine longitudinal observational databases by applying them to The Health Improvement Network (THIN) for six drugs with well-documented negative side effects. Our results show that none of the existing algorithms was able to consistently identify known adverse drug reactions above events related to the cause of the drug, and no algorithm was superior.
Introduction
Medical drugs are prescribed frequently throughout the world but each time a patient takes a drug there is a risk of the patient developing a side effect, referred to as an adverse drug reaction (ADR). The purpose of a prescription drug is to improve a patient's medical state, but ironically, sometimes ADRs can cause a patient's medical state to deteriorate. To prevent this occurring it is important to know all the ADRs that can occur and to be able to identify patients that have a high risk for developing a specific ADR. Obvious ADRs can often be found during clinical trials but the main purpose of a clinical trial is to determine the effectiveness of the drug being tested and not to identify all the possible ADRs. Less obvious ADRs, long term usage ADRs, ADRs resulting from co-prescription of drugs or ADRs that occur in subgroups of the population that are underrepresented in clinical trials (for example children and pregnant females) can only be detected by continuously monitoring patients who are prescribed the drug after marketing, a process known as post marketing surveillance.
The majority of the methods implemented for post marketing surveillance use a database known as the Spontaneous Reporting System (SRS) database containing voluntary reports of suspected drug/s and adverse drug event pairs <|cite_start|> (Reference: Quantitative signal detection using spontaneous ADR reporting: Quantitative methods are increasingly used to analyse spontaneous reports. We describe the core concepts behind the most common methods, the proportional reporting ratio (PRR), reporting odds ratio (ROR), information component (IC) and empirical Bayes geometric mean (EBGM). We discuss the role of Bayesian shrinkage in screening spontaneous reports, the importance of changes over time in screening the properties of the measures. Additionally we discuss three major areas of controversy and ongoing research: stratification, method evaluation and implementation. Finally we give some suggestions as to where emerging research is likely to lead. Copyright © 2009 John Wiley & Sons, Ltd.) <|cite_end|> <|cite_start|> (Reference: A Bayesian neural network method for adverse drug reaction signal generation: ) <|cite_end|> <|cite_start|> (Reference: Bayesian Data Mining in Large Frequency Tables, with an Application to the FDA Spontaneous Reporting System: Abstract A common data mining task is the search for associations in large databases. Here we consider the search for “interestingly large” counts in a large frequency table, having millions of cells, most of which have an observed frequency of 0 or 1. We first construct a baseline or null hypothesis expected frequency for each cell, and then suggest and compare screening criteria for ranking the cell deviations of observed from expected count. A criterion based on the results of fitting an empirical Bayes model to the cell counts is recommended. An example compares these criteria for searching the FDA Spontaneous Reporting System database maintained by the Division of Pharmacovigilance and Epidemiology. In the example, each cell count is the number of reports combining one of 1,398 drugs with one of 952 adverse events (total of cell counts = 4.9 million), and the problem is to screen the drug-event combinations for possible further investigation.) <|cite_end|> <|cite_start|> (Reference: A comparison of measures of disproportionality for signal detection in spontaneous reporting systems for adverse drug reactions: A continuous systematic review of all combinations of drugs and suspected adverse reactions (ADRs) reported to a spontaneous reporting system, is necessary to optimize signal detection. To focus attention of human reviewers, quantitative procedures can be used to sift data in different ways. In various centres, different measures are used to quantify the extent to which an ADR is reported disproportionally to a certain drug compared to the generality of the database. The objective of this study is to examine the level of concordance of the various estimates to the measure used by the WHO Collaborating Centre for International ADR monitoring, the information component (IC), when applied to the dataset of the Netherlands Pharmacovigilance Foundation Lareb.) <|cite_end|>. The SRS database is known to have duplicated, missing and incorrect entries <|cite_start|> (Reference: Perspectives on the Use of Data Mining in Pharmacovigilance: ) <|cite_end|>. 
It is also common for SRS databases to be prone to under-reporting as ADRs corresponding to less serious medical events may not be reported or ADRs that are very rare may never be suspected by anyone. As a consequence of these issues, it may not be possible to identify all ADRs by mining the SRS databases or identification may only be possible after many thousands of patients have been prescribed the drug and had the ADR. This is undesirable as many patients may die before a rare but fatal ADR is identified.
A new type of medical database, known as the longitudinal observational database (LOD), has recently gained interest from the research community <|cite_start|> (Reference: The Emerging Role of Electronic Medical Records in Pharmacogenomics: ) <|cite_end|> for post marketing surveillance as it does not rely on voluntary reports and offers a new perspective for detecting ADRs. One example of a LOD is The Health Improvement Network (THIN) database (www.epic-uk.org) that contains medical and prescription records for all registered patients at participating general practices in the UK. General practitioners are required to enter all the medical events they are made aware of, so serious but rare ADRs are likely to be contained within the database.
There are currently four algorithms based on sequential pattern mining techniques that have been developed to mine LODs to detect ADRs, but each algorithm has only been applied to one type of LOD and there has been no research to compare the four algorithms and identify conditions where one algorithm is preferable. In this paper we applied the existing algorithms to the THIN database for six drugs with generally well-known ADRs and compared them using the measure known as the Mean Average Precision (MAP) <|cite_start|> (Reference: Disproportionality methods for pharmacovigilance in longitudinal observational databases: Data mining disproportionality methods (PRR, ROR, EBGM, IC, etc.) are commonly used to identify drug safety signals in spontaneous report system (SRS) databases. Newer data sources such as longitudinal observational databases (LOD) provide time-stamped patient-level information and overcome some of the SRS limitations such as an absence of the denominator, total number of patients who consume a drug, and limited temporal information. Application of the disproportionality methods to LODs has not been widely explored. The scale of the LOD data provides an interesting computational challenge. Larger health claims databases contain information on more than 50 million patients and each patient has records for up to 10 years. In this article we systematically explore the application of commonly used disproportionality methods to simulated and real LOD data.) <|cite_end|>, which indicates how well each algorithm can rank known ADRs above medical events that are unlikely to be ADRs from a collection of events. We also calculated the precision of each algorithm on the top 10 and 50 ranked events for each drug.
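For reference, the ranking measures used in this comparison can be computed as in the following Python sketch. This is a generic illustration based on the standard definitions of average precision and precision at $k$, with known ADRs treated as the relevant items; the function names and data layout are ours and it is not a description of any specific implementation.
\begin{verbatim}
def average_precision(ranked_events, known_adrs):
    # ranked_events: list of event codes, most suspect first
    # known_adrs:    set of event codes regarded as true ADRs
    hits, precision_sum = 0, 0.0
    for k, event in enumerate(ranked_events, start=1):
        if event in known_adrs:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(known_adrs) if known_adrs else 0.0

def precision_at_k(ranked_events, known_adrs, k):
    # fraction of the top-k ranked events that are known ADRs
    return sum(e in known_adrs for e in ranked_events[:k]) / k

# MAP is then the mean of average_precision over the drugs studied.
\end{verbatim}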
The remainder of this paper is organised as follows. Section two contains descriptions of the THIN database, the drugs investigated and the four existing algorithms, including information relating to the LODs each algorithm was previously applied to. Details of the method used to compare the existing algorithms can be found in section three. Section four contains the results and is followed by a discussion of the implications of the results in section five. The paper finishes with the conclusion in section six.
Related Work
\begin{table*}[t]
\centering
\caption{General information on the population of patients prescribed each drug that was investigated.
Total is the number of prescriptions of the drug in the database and includes repeat prescriptions; First is the number of first-time prescriptions of the drug; and $13$ months is the number of prescriptions where the drug was recorded as being prescribed for the first time in $13$ months for a patient. The average age and gender ratio were calculated by considering all prescriptions of the drug.
}
\label{tab:pat}
\begin{tabular}{cccccc} \\
Drug & Total & First & 13 months &Average Age (St Dev) &Gender Ratio (F/M) \\
\hline
Ciprofloxacin & $483\;217$& $277\;871$ & $322\;482$ & $57.25(19.95)$ & $1.28$\\
Norfloxacin & $30\;043$ &$15\;160$ &$17\;390$ & $59.23(19.74)$& $2.90$ \\
Doxepin & $73\;684$ & $7607$ & $8265$ & $64.46(16.22)$ & $2.65$ \\
Nifedipine & $2\;905\;177$ &$144\;356$ &$154\;128$ & $69.70(12.04)$ & $1.09$ \\
Benzylpenicillin Sodium & $1217$ & $1003$ & $1048$ & $26.07(24.89)$ & $1.10$ \\
Glibenclamide & $418\;473$ & $15\;222$ & $16\;445$ & $67.83(11.38)$ & $0.82$ \\
\end{tabular}
\end{table*}
\subsection{THIN Database}
The THIN database consists of a collection of information obtained from participating UK general practices including information on patients such as their year of birth, gender and family connections and the demographics of the area they live in. The database also contains temporal information detailing a patient's prescription and medical event histories since registration. For this comparison a database containing records from 495 general practices was used. This subset of the THIN database contained approximately four million patients, over 358 million prescription entries and over 233 million medical event entries.
Each medical event is recorded in the database by a reference code known as a Read code. The Read codes used in the THIN database are an independent system designed specifically for primary care, but every ICD-9-CM (International Classification of Diseases, Ninth Edition, Clinical Modification) code (or analogue) has a corresponding Read code <|cite_start|> (Reference: The use of electronic databases in primary care research: DISCOVERY Research Group, Peninsula College of Medicine & Dentistry, Veysey Building, Salmon Pool Lane, Exeter, EX2 4SG, UK and Doctoral student, School of Social & Community Medicine, University of Bristol, Canynge Hall, 39 Whatley Road, Bristol, BS8 2PS, UK. *Correspondence to W Hamilton Peninsula College of Medicine & Dentistry, Veysey Building, Salmon Pool Lane, Exeter, EX2 4SG, UK; E-mail: [email protected] Received 31 May 2011; Accepted 6 June 2011.) <|cite_end|>. The Read codes suffer from redundancy as different Read codes can correspond to the same medical event; for example, there are $15$ Read codes for the medical event `vomiting' under a range of categories including `History/symptoms', `Symptoms, signs and ill-defined conditions', `Infections and parasitic diseases' and `Unspecified conditions'.
A known issue of the THIN database is that it is common to have incorrect time stamps of medical events corresponding to newly registered patients. As patients can change general practices at any age, when they register they may have a history of events that a doctor needs to record. The term `registration event dropping' is used when historic or previously diagnosed events of newly registered patients are entered into the database. For example, when a new patient first visits their doctor they may inform the doctor of a previously diagnosed chronic illness such as `diabetes'. This medical event will then be input into the database with a date corresponding to the visit, rather than the actual date the patient was diagnosed with diabetes. As the dates recorded for the `registration event drops' are frequently incorrect, including them in a research study will bias results. Research suggests that `registration event dropping' is significantly reduced after a patient is registered for a year. To prevent `registration event dropping' from biasing the results in this study, the first 12 months of medical history after registration are ignored for each patient, as justified in <|cite_start|> (Reference: The relationship between time since registration and measured incidence rates in the general practice research database: The General Practice Research Database (GPRD) is widely used to study incidence rates. This study examines whether incidence rates are overestimated during the first year after registration, how long one needs to wait to obtain accurate incidence rates, and whether the time period of overestimation differs among disease types.) <|cite_end|>.
As patients can move to a different practice at any time (or die), we only include prescriptions in the study where the corresponding patient is still active for a minimum of 30 days afterwards. This prevents the bias due to `under-reporting' of ADRs that may occur if a patient no longer attends the practice. The last date a patient is active is considered to be the maximum date of any record for the patient or the patient's date of death.
\subsection{Drugs Investigated}
Six different drugs with variable attributes and prescription indications (cause of the prescription) were chosen for the investigation. The drug Nifedipine is a calcium channel blocker that helps relax the smooth muscles in the heart and blood vessels, allowing blood to flow with greater ease, and is therefore used for prophylaxis of angina and to treat hypertension or Raynaud's phenomenon. The penicillin Benzylpenicillin Sodium and the fluoroquinolones Ciprofloxacin and Norfloxacin are three antibiotics used to treat bacterial infections. The other two drugs investigated are Doxepin, a tricyclic antidepressant, and Glibenclamide, a sulfonylurea used to treat type 2 diabetes mellitus.
The drugs Ciprofloxacin and Norfloxacin were chosen to investigate how well the data mining methods perform when applied to different drug population sizes (the collection of patients prescribed the drug), as the majority of known ADRs are the same for both drugs but the numbers of prescriptions recorded in the database are $483\;217$ and $30\;043$ for Ciprofloxacin and Norfloxacin respectively. The two drugs also have a similar average patient age at prescription but have different drug population gender distributions. Nifedipine was previously chosen to investigate one of the existing algorithms, so we also chose to use Nifedipine in this study to gain some insight into how robust the existing algorithms are when applied to different LODs.
Benzylpenicillin Sodium has the lowest number of prescriptions of any of the drugs chosen for the study, with only 1217 prescriptions being recorded in the database. Benzylpenicillin Sodium is also considered a fairly safe drug with only a few known ADRs being listed. This will test how well the algorithms perform with small amounts of data and fewer ADRs to detect. It also has the lowest average age at prescription, 26 years, compared to average ages ranging from 57 to 70 years for the other drugs. Doxepin and Glibenclamide were chosen for variety as they are prescribed for different illnesses from the other drugs.
Table \ref{tab:pat} presents the number of patients that are prescribed each drug in the database and lists general statistics such as the mean age and patient gender ratio (females/males) for each drug investigated.
\subsection{Existing Algorithms}
\subsubsection{Algorithm 1}
Methods that are implemented on Spontaneous Reporting System (SRS) databases only have limited information available, as the drug prescription rates and background incidence rates of medical events are both unknown. To overcome these issues, disproportionality methods are implemented, as their calculations are independent of the medical event and drug prescription rates, due to these terms cancelling out during division. The disproportionality methods make use of a contingency table, see Table \ref{tab:srs}. For example, a disproportionality algorithm known as the Reporting Odds Ratio (ROR) first contrasts how often the event of interest occurs with the drug of interest compared to any other drug ($\frac{w_{00}}{w_{10}}$) and then compares this with the contrast of how often any other event occurs with the drug of interest compared to any other drug ($\frac{w_{01}}{w_{11}}$), see Eq. \ref{eq:ror}.
\begin{equation}
\label{eq:ror}
ROR = \frac{w_{00}/w_{10}}{w_{01}/w_{11}}
\end{equation}
Previous work has investigated applying algorithms developed for SRS databases after transforming a LOD into an SRS-style database by inferring suspected drug and medical event pairs that are ADRs <|cite_start|> (Reference: Disproportionality methods for pharmacovigilance in longitudinal observational databases: Data mining disproportionality methods (PRR, ROR, EBGM, IC, etc.) are commonly used to identify drug safety signals in spontaneous report system (SRS) databases. Newer data sources such as longitudinal observational databases (LOD) provide time-stamped patient-level information and overcome some of the SRS limitations such as an absence of the denominator, total number of patients who consume a drug, and limited temporal information. Application of the disproportionality methods to LODs has not been widely explored. The scale of the LOD data provides an interesting computational challenge. Larger health claims databases contain information on more than 50 million patients and each patient has records for up to 10 years. In this article we systematically explore the application of commonly used disproportionality methods to simulated and real LOD data.) <|cite_end|>. Zorych {\it et al.} implemented three different ways to map the LOD into an SRS database, including a mapping called `Modified-spontaneous reporting system' that incorporates the additional information available in LODs on the number of patients that do not have any suspected ADRs after a drug and on the background rate of medical events. They found that incorporating the additional information did not improve results in the simulated and real databases studied; consequently, in this paper we chose to transform the THIN database using their `SRS mapping' as this method is more efficient. The values in the contingency table for drug $X$ and medical event $Y$ using the `SRS mapping' are calculated such that:
\begin{description}
\item[$w_{00}$] is the number of distinct occurrences of event $Y$ within 30 days after the drug $X$ is prescribed
\item[$w_{01}$] is the number of distinct occurrences of non-$Y$ medical events that occur within 30 days after the drug $X$ is prescribed
\item[$w_{10}$] is the number of distinct occurrences of event $Y$ within 30 days after any drug other than $X$ is prescribed
\item[$w_{11}$] is the number of distinct occurrences of non-$Y$ events within 30 days after any non-$X$ drug is prescribed.
\end{description}
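As an illustration of this mapping, the following Python sketch counts the four cells for a single drug/event pair. It is a simplified illustration rather than the exact implementation used in this study: the function name and data layout are ours, prescriptions and medical events are assumed to be available as flat lists of (patient, code, date) tuples, and every event record falling within the 30 day window of a prescription is treated as one distinct occurrence.
\begin{verbatim}
from datetime import timedelta

def srs_mapping_counts(prescriptions, events, drug_x, event_y,
                       window_days=30):
    # prescriptions: iterable of (patient_id, drug_code, date)
    # events:        iterable of (patient_id, event_code, date)
    window = timedelta(days=window_days)
    events_by_patient = {}
    for pid, code, date in events:
        events_by_patient.setdefault(pid, []).append((code, date))
    w00 = w01 = w10 = w11 = 0
    for pid, drug, p_date in prescriptions:
        for code, e_date in events_by_patient.get(pid, []):
            if p_date <= e_date <= p_date + window:
                if drug == drug_x and code == event_y:
                    w00 += 1      # event Y after drug X
                elif drug == drug_x:
                    w01 += 1      # other event after drug X
                elif code == event_y:
                    w10 += 1      # event Y after another drug
                else:
                    w11 += 1      # other event after another drug
    return w00, w01, w10, w11
\end{verbatim}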
The SRS disproportionality algorithm implemented in this paper is the ROR, where medical events are ranked in descending order of the left bound of the 90\% confidence interval of the ROR, Eq. \ref{eq:ror2}, as previous work showed that the $ROR_{05}$ was consistently better than the ROR <|cite_start|> (Reference: Disproportionality methods for pharmacovigilance in longitudinal observational databases: Data mining disproportionality methods (PRR, ROR, EBGM, IC, etc.) are commonly used to identify drug safety signals in spontaneous report system (SRS) databases. Newer data sources such as longitudinal observational databases (LOD) provide time-stamped patient-level information and overcome some of the SRS limitations such as an absence of the denominator, total number of patients who consume a drug, and limited temporal information. Application of the disproportionality methods to LODs has not been widely explored. The scale of the LOD data provides an interesting computational challenge. Larger health claims databases contain information on more than 50 million patients and each patient has records for up to 10 years. In this article we systematically explore the application of commonly used disproportionality methods to simulated and real LOD data.) <|cite_end|>.
\begin{equation}
\label{eq:ror2}
ROR_{05} = \exp\left( \ln\left(\frac{w_{00}/w_{10}}{w_{01}/w_{11}}\right) - 1.645 \sqrt{ \frac{1}{w_{00}} +\frac{1}{w_{01}} +\frac{1}{w_{10}} +\frac{1}{w_{11}} } \right)
\end{equation}
\begin{table}
\centering
\caption{Contingency table used in existing SRS methods.}
\label{tab:srs}
\begin{tabular}{c|cc}
& Event j =Yes & Event j =No \\ \hline
Drug i=Yes & w$_{00}$ & w$_{01}$ \\
Drug i=No & w$_{10}$ & w$_{11}$ \\
\end{tabular}
\end{table}
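Given the four cell counts, the $ROR_{05}$ of Eq. \ref{eq:ror2} reduces to a few lines of code. The sketch below is illustrative only and assumes all four cells are non-zero (a zero cell would need a continuity correction, e.g. adding 0.5 to every cell, which is not shown); candidate events are then ranked in descending order of the returned value.
\begin{verbatim}
import math

def ror_05(w00, w01, w10, w11):
    # lower bound of the 90% confidence interval of the ROR
    ror = (w00 / w10) / (w01 / w11)
    se = math.sqrt(1/w00 + 1/w01 + 1/w10 + 1/w11)
    return math.exp(math.log(ror) - 1.645 * se)
\end{verbatim}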
\subsubsection{Algorithm 2}
Noren {\it et al.} developed, specifically for LODs, a disproportionality-based sequential pattern mining algorithm that uses the temporal information contained in LODs to contrast the Observed to Expected ratio (OE ratio) of an event and drug pair between two different time periods <|cite_start|> (Reference: Temporal pattern discovery in longitudinal electronic patient records: ) <|cite_end|>. The database the OE ratio was developed for is the UK IMS Disease Analyzer, a database of UK general practice records covering over two million patients and 120 million prescriptions. The database contained $3\;445$ drugs and $5\;753$ medical events encoded by the ICD-10 <|cite_start|> (Reference: Research on classification of emergency materials: Proper classification of emergency materials is essential to build an emergency reserve. According to two new dimensions, which are both the importance of emergency materials and the difficulty degree of market access, emergency materials are divided into nine categories. This paper evaluates the emergency materials using the fuzzy comprehensive evaluation method so as to determine the classification of emergency materials. The appropriate reserves strategies are formulated when the type of emergency materials is identified. An example is provided to apply this model and method.) <|cite_end|>. When the UK IMS Disease Analyzer database was mined using the OE ratio for the drug Nifedipine, it was found to have a $precision_{10}=0.7$.
The OE ratio algorithm compares the number of patients that have the first prescription of drug $x$ (in thirteen months) followed by event $y$ within a set time $t$ relative to the expected number of patients if drug $x$ and event $y$ were independent. Let $n_{xy}^{t}$ denote the number of patients that have drug $x$ for the first time and event $y$ within time period $t$, $n_{.y}^{t}$ the number of patients that are prescribed any drug for the first time and have event $y$ within time period $t$, $n_{x.}^{t}$ the number of patients that have drug $x$ for the first time with an active follow up in time period $t$, and $n_{..}^{t}$ the number of patients that have any drug for the first time with an active follow up in time period $t$. The expected number of patients that have drug $x$ and then event $y$ in a time period $t$ is then,
\begin{equation}
E_{xy}^{t} = n_{x.}^{t} \frac{n_{.y}^{t}}{n_{..}^{t}}
\label{eq:expec}
\end{equation}
If, for a given drug, the event occurs more than expected, the ratio between the observed and expected will be greater than one. By taking the $\log_{2}$ of the ratio, a positive value suggests an interesting association between a drug and event. To prevent rare events or drugs resulting in a small expectation, which can cause volatility, a statistical shrinkage method is applied.
\begin{equation}
IC = \log_{2} \frac{n_{xy}^{t} + 1/2}{E_{xy}^{t} + 1/2}
\label{eq:icshrink}
\end{equation}
The shrinkage biases the $IC$ towards zero when an event or drug is rare. The credibility interval for the $IC$ is given by the base-2 logarithms of the solutions to Eq. \ref{eq:ci} with $q=0.025$ and $q=0.975$.
\begin{equation}
\int_{0}^{\mu_{q}} \frac{(E_{xy}^{t}+1/2)^{n_{xy}^{t}+1/2}}{\Gamma(n_{xy}^{t}+1/2)} u^{(n_{xy}^{t}+1/2)-1} e^{-u(E_{xy}^{t}+1/2)} du = q
\label{eq:ci}
\end{equation}
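The integrand in Eq. \ref{eq:ci} is the density of a Gamma distribution with shape $n_{xy}^{t}+1/2$ and rate $E_{xy}^{t}+1/2$, so in practice the credibility interval can be obtained from the inverse cumulative distribution function rather than by numerical integration. A minimal sketch, assuming SciPy is available, is given below; the function name is ours.
\begin{verbatim}
from math import log2
from scipy.stats import gamma

def ic_credibility_interval(n_xy, e_xy):
    shape, rate = n_xy + 0.5, e_xy + 0.5
    lower = gamma.ppf(0.025, a=shape, scale=1.0 / rate)
    upper = gamma.ppf(0.975, a=shape, scale=1.0 / rate)
    return log2(lower), log2(upper)   # 95% credibility interval of the IC
\end{verbatim}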
The above can find possible drug and event associations of interest for a given $t$; however, the authors suggest that general temporal patterns can be found by comparing the $IC$ of two different time periods. The follow-up period of primary interest is denoted by $u$ and the control time period by $v$. This removes event and drug relationships that just happen to occur more in certain sub-populations. The difference between the $IC$ for both time periods is,
\begin{equation}
\log_{2} \frac{n_{xy}^{u}}{E_{xy}^{u}} - \log_{2} \frac{n_{xy}^{v}}{E_{xy}^{v}}
\end{equation}
re-arranging and adding a shrinkage term gives,
\begin{equation}
IC_{\Delta} = \log_{2} \frac{n_{xy}^{u}+1/2}{E_{xy}^{u*}+1/2}
\end{equation}
where
\begin{equation}
E_{xy}^{u*} = \frac{n_{xy}^{v}}{E_{xy}^{v}}.E_{xy}^{u}
\end{equation}
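The calculation of $IC_{\Delta}$ therefore only requires the counts defined above for the two time periods $u$ and $v$. The Python sketch below is a direct transcription of these equations; the variable names are ours, no filtering of events is shown, and it assumes all expected counts are non-zero.
\begin{verbatim}
from math import log2

def ic_delta(n_u, nx_u, ny_u, nall_u,
             n_v, nx_v, ny_v, nall_v):
    # n_*    : patients with drug x followed by event y in the window
    # nx_*   : patients newly prescribed drug x with follow-up in the window
    # ny_*   : patients newly prescribed any drug followed by event y
    # nall_* : patients newly prescribed any drug with follow-up
    e_u = nx_u * ny_u / nall_u        # expected count, follow-up window u
    e_v = nx_v * ny_v / nall_v        # expected count, control window v
    e_u_star = (n_v / e_v) * e_u      # control-adjusted expectation
    return log2((n_u + 0.5) / (e_u_star + 0.5))
\end{verbatim}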
In this paper we calculate the $IC_{\Delta}$ as described above by contrasting the 30-day period after the drug prescription with a time period of $27$ to $21$ months prior to prescription. The OE ratio ranks medical events in descending order of the $IC_{\Delta}$, but removes some noise by filtering medical events with a positive $IC$ a month prior to the prescription or with a positive $IC$ on the day of prescription. As the THIN database does not contain information on the time during the day that a prescription is issued or a medical event is recorded, it is possible that medical events occurring on the same day as the prescription may be ADRs, so we investigate two different implementations of the OE ratio: OE ratio 1 filters medical events with an $IC$ value a month prior to the prescription greater than the $IC$ value in the month after (not including the day of prescription), while OE ratio 2 filters every medical event with an $IC$ value a month prior to the prescription or on the day of prescription that is greater than the $IC$ value a month after.
\subsubsection{Algorithm 3}
Mining Unexpected Temporal Association Rules given the Antecedent (MUTARA) <|cite_start|> (Reference: Mining Unexpected Associations for Signalling Potential Adverse Drug Reactions from Administrative Health Databases: ) <|cite_end|> is a sequential pattern mining algorithm that finds medical events that occur more than expected within a user-defined time period after a drug is first prescribed. MUTARA implements a measure of interestingness frequently used in sequential pattern mining known as Leverage. In the context of medical databases, the Leverage gives an indication of how temporally dependent a medical event is on the presence of a drug, as it is the number of patients that have the medical event after the first time they are prescribed the drug of interest minus the expected number of patients who would have the medical event if the presence of the medical event was independent of the presence of the drug.
MUTARA was originally developed for and implemented on the Queensland Linked Data Set (QLDS), comprising the Commonwealth Medicare Benefits Scheme (MBS), Pharmaceutical Benefits Scheme (PBS) and Queensland Hospital morbidity data. The database contained 2020 different diagnoses (medical events) and 758 distinct drug codes. The database contains limited information on a patient's medical history, as it only contains information recorded while a patient is in hospital. When MUTARA was applied to the QLDS to detect ADRs for older females prescribed alendronate, the precision of the top ten events ($precision_{10}$) was $0.1$.
It is common for patients to have medical events repeated in their sequence, and if a patient has a disease shortly before a prescription and then again within $T$ days of the prescription, it is unlikely that the disease is an ADR. As a consequence, the authors of MUTARA decided to filter `predictable' medical events by removing any medical events that occurred within $T$ days after the drug prescription and also occurred in a user-defined time period prior to the drug prescription.
If we let $P(X \overset{T}{\hookrightarrow} Y)$ denote the probability of having event $Y$ `unpredictably' within $T$ days of drug $X$, then if event $Y$ occurs independently of drug $X$, $P(X \overset{T}{\hookrightarrow} Y)=P(X).P( \overset{T}{\hookrightarrow} Y)$. A large value for $P(X \overset{T}{\hookrightarrow} Y)-P(X).P( \overset{T}{\hookrightarrow} Y)$ would then suggest a dependency of $Y$ on drug $X$, indicating $Y$ as a possible ADR.
The above measure can be estimated by,
\begin{equation}
Unexlev = Supp(X \overset{T}{\hookrightarrow} Y)-\frac{Supp(X).Supp( \overset{T}{\hookrightarrow} Y)}{\mbox{Population}}
\end{equation}
Where,
\begin{itemize}
\item $Supp(X \overset{T}{\hookrightarrow} Y)$ - the number of patients in the database that have the medical event $Y$ within $T$ days of the first time being prescribed drug $X$ and do not have medical event $Y$ in a user-defined time period prior to $X$.
\item $Supp(X)$ - the number of patients in the database that are prescribed the drug of interest.
\item $Supp( \overset{T}{\hookrightarrow} Y)$ - the number of patients who have never been prescribed drug $X$ and have medical event $Y$ in a randomly chosen time period of $T$ days plus $Supp(X \overset{T}{\hookrightarrow} Y)$.
\item Population - the total number of patients
\end{itemize}
MUTARA calculates the Unexlev for each medical event input by the user (in this paper we input any medical event that occurs within 30 days of the first time the drug is prescribed for at least one patient) and returns a ranked list in descending order of the Unexlev.
In this paper we use $T=30$ and choose to investigate two different time periods prior to the prescription that determine whether a medical event is `predictable': 180 days and 60 days directly prior to the day the drug is first prescribed (MUTARA$_{180}$ and MUTARA$_{60}$ respectively).
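A minimal sketch of the Unexlev calculation and of the `predictable' filter is given below. The support counts are assumed to have been computed from the database beforehand, dates are assumed to be datetime.date objects, and whether a same-day event counts as predictable is a choice made in this sketch rather than a detail of the original algorithm.
\begin{verbatim}
def unexlev(supp_xy_unexpected, supp_x, supp_y_window, population):
    # supp_xy_unexpected: patients with event y 'unpredictably' within
    #                     T days of their first prescription of drug x
    # supp_x:             patients prescribed drug x
    # supp_y_window:      patients never given drug x with event y in a
    #                     random T-day window, plus supp_xy_unexpected
    return supp_xy_unexpected - (supp_x * supp_y_window) / population

def is_predictable(event_dates, rx_date, reference_days):
    # the event also occurs in the reference period before the prescription
    return any(0 < (rx_date - d).days <= reference_days
               for d in event_dates)
\end{verbatim}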
\subsubsection{Algorithm 4}
Highlighting UTARs Negating TARs (HUNT) is a modified version of MUTARA <|cite_start|> (Reference: Signaling potential adverse drug reactions from administrative health databases: The work is motivated by real-world applications of detecting Adverse Drug Reactions (ADRs) from administrative health databases. ADRs are a leading cause of hospitalization and death worldwide. Almost all current postmarket ADR signaling techniques are based on spontaneous ADR case reports, which suffer from serious underreporting and latency. However, administrative health data are widely and routinely collected. They, especially linked together, would contain evidence of all ADRs. To signal unexpected and infrequent patterns characteristic of ADRs, we propose a domain-driven knowledge representation Unexpected Temporal Association Rule (UTAR), its interestingness measure, unexlev, and a mining algorithm MUTARA (Mining UTARs given the Antecedent). We then establish an improved algorithm, HUNT, for highlighting infrequent and unexpected patterns by comparing their ranks based on unexlev with those based on traditional leverage. Various experimental results on real-world data substantiate that both MUTARA and HUNT can signal suspected ADRs while traditional association mining techniques cannot. HUNT can reliably shortlist statistically significantly more ADRs than MUTARA (p=0.00078). HUNT, e.g., not only shortlists the drug alendronate associated with esophagitis as MUTARA does, but also shortlists alendronate with diarrhoea and vomiting for older (age ¿ 60) females. We also discuss signaling ADRs systematically by using HUNT.) <|cite_end|>, originally developed and implemented on the QLDS with previous results of a $precision_{10}=0.3$ when applied to detect ADR for older females prescribed alendronate and a $precision_{10}=0.1$ when applied for older males. MUTARA was found to have problems distinguishing between ADRs and therapeutic failures, as therapeutic failure medical events frequently occur after the drug is prescribed and have a high Unexlev value. Both therapeutic failures and ADRs have a high Unexlev, but unlike ADRs, therapeutic failure medical events should also occur prior to the drug prescription for some patients. The `predictable' filter should impact on therapeutic failure events but not ADRs, so the rank of therapeutic failure events can be reduced by comparing the Unexlev with the standard Leverage that calculates the temporal dependency of a medical event on a drug but does not filter `predictable' events, see Eq. \ref{eq:lev}.
\begin{equation}
\label{eq:lev}
Leverage = Supp(X \overset{T}{\rightarrow} Y)-\frac{Supp(X).Supp( \overset{T}{\rightarrow} Y)}{\mbox{Population}}
\end{equation}
Where,
\begin{itemize}
\item $Supp(X \overset{T}{\rightarrow} Y)$ - the number of patients in the database that have the medical event $Y$ within $T$ days of the first time being prescribed drug $X$.
\item $Supp(X)$ - the number of patients in the database that are prescribed the drug of interest.
\item $Supp( \overset{T}{\rightarrow} Y)$ - the number of patients who have never been prescribed drug $X$ and have medical event $Y$ in a randomly chosen time period of $T$ days plus $Supp(X \overset{T}{\rightarrow} Y)$.
\item Population - the total number of patients
\end{itemize}
HUNT calculates both the Unexlev and Leverage values, assigns each medical event two ranks ($RANK_{Unexlev}$ and $RANK_{Leverage}$) based on the Unexlev and Leverage values respectively in descending order and finally returns the list of medical events in decreasing order of the rank ratio ($RR$),
\begin{equation}
RR = \frac{RANK_{Leverage}}{RANK_{Unexlev}}
\end{equation}
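Once the two interestingness measures are available for every candidate event, HUNT's re-ranking step is straightforward. The sketch below is our own illustration (ties are broken arbitrarily); it assigns rank 1 to the largest value of each measure and returns the events sorted by decreasing rank ratio.
\begin{verbatim}
def hunt_ranking(unexlev_scores, leverage_scores):
    # both arguments are dicts mapping event code -> score
    def ranks(scores):
        ordered = sorted(scores, key=scores.get, reverse=True)
        return {event: i + 1 for i, event in enumerate(ordered)}
    r_lev = ranks(leverage_scores)
    r_unex = ranks(unexlev_scores)
    rr = {e: r_lev[e] / r_unex[e] for e in unexlev_scores}
    return sorted(rr, key=rr.get, reverse=True)  # most suspect events first
\end{verbatim}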
In this paper HUNT is implemented with the same parameters as described for MUTARA. <|paper_end|> | [
"<|reference_start|> Quantitative signal detection using spontaneous ADR reporting: Quantitative methods are increasingly used to analyse spontaneous reports. We describe the core concepts behind the most common methods, the proportional reporting ratio (PRR), reporting odds ratio (ROR), information component (IC) and empirical Bayes geometric mean (EBGM). We discuss the role of Bayesian shrinkage in screening spontaneous reports, the importance of changes over time in screening the properties of the measures. Additionally we discuss three major areas of controversy and ongoing research: stratification, method evaluation and implementation. Finally we give some suggestions as to where emerging research is likely to lead. Copyright © 2009 John Wiley & Sons, Ltd. <|reference_end|>",
"<|reference_start|> Perspectives on the Use of Data Mining in Pharmacovigilance: <|reference_end|>",
"<|reference_start|> Disproportionality methods for pharmacovigilance in longitudinal observational databases: Data mining disproportionality methods (PRR, ROR, EBGM, IC, etc.) are commonly used to identify drug safety signals in spontaneous report system (SRS) databases. Newer data sources such as longitudinal observational databases (LOD) provide time-stamped patient-level information and overcome some of the SRS limitations such as an absence of the denominator, total number of patients who consume a drug, and limited temporal information. Application of the disproportionality methods to LODs has not been widely explored. The scale of the LOD data provides an interesting computational challenge. Larger health claims databases contain information on more than 50 million patients and each patient has records for up to 10 years. In this article we systematically explore the application of commonly used disproportionality methods to simulated and real LOD data. <|reference_end|>",
"<|reference_start|> Temporal pattern discovery in longitudinal electronic patient records: <|reference_end|>"
] | [
0,
4,
9,
11
] | {"<|cite_1|>": "ss-2536698", "<|cite_2|>": "ss-1691117", "<|cite_3|>": "ss-1691118", "<|cite_4|>": "ss-1690027", "<|cite_5|>": "ss-1691119", "<|cite_6|>": "ss-1958334", "<|cite_7|>": "ss-1691120", "<|cite_8|>": "ss-1691121", "<|cite_9|>": "ss-1691122", "<|cite_10|>": "ss-1691120", "<|cite_11|>": "ss-1691120", "<|cite_12|>": "ss-1158920", "<|cite_13|>": "ss-1691123", "<|cite_14|>": "ss-1691124", "<|cite_15|>": "ss-1691125"} |
2006.08273 | <|paper_start|> Title: Behind the Mask: A Computational Study of Anonymous' Presence on Twitter
Abstract: Behind the Mask: A Computational Study of Anonymous' Presence on Twitter: The hacktivist group Anonymous is unusual in its public-facing nature. Unlike other cybercriminal groups, which rely on secrecy and privacy for protection, Anonymous is prevalent on the social media site, Twitter. In this paper we re-examine some key findings reported in previous small-scale qualitative studies of the group using a large-scale computational analysis of Anonymous' presence on Twitter. We specifically refer to reports which reject the group's claims of leaderlessness, and indicate a fracturing of the group after the arrests of prominent members in 2011-2013. In our research, we present the first attempts to use machine learning to identify and analyse the presence of a network of over 20,000 Anonymous accounts spanning from 2008-2019 on the Twitter platform. In turn, this research utilises social network analysis (SNA) and centrality measures to examine the distribution of influence within this large network, identifying the presence of a small number of highly influential accounts. Moreover, we present the first study of tweets from some of the identified key influencer accounts and, through the use of topic modelling, demonstrate a similarity in overarching subjects of discussion between these prominent accounts. These findings provide robust, quantitative evidence to support the claims of smaller-scale, qualitative studies of the Anonymous collective.
Introduction
The hacker/hacktivist collective Anonymous is a group whose nebulous and contradictory ethos provide a source of both bafflement and fascination to those endeavoring to study them. Originating from the /b/ board of the image sharing site 4Chan, in which the board's participants interact anonymously with each other, this sharing of a singular ``Anon'' (a member of Anonymous) identity began to resonate with participants on this site, setting the stage for growth into the group we know today <|cite_start|> (Reference: `{Anonymous: Not available.) <|cite_end|>. A group famed for their campaigns (dubbed `Ops') targeting many organisations such as The Church of Scientology <|cite_start|> (Reference: `{Anonymous: Not available.) <|cite_end|>, the security firm HBGary <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>, as well as ISIS, and the governments of the United States and Australia <|cite_start|> (Reference: `{Anonymous: Not available.) <|cite_end|>.
Interestingly, there are a significant number of Twitter accounts claiming some form of affiliation with Anonymous. From these accounts' interactions and posts, one can begin to establish a sense of how the structure and message of Anonymous as a group is presented. Accordingly, in this research we aim to use the findings of our large-scale study of Anonymous Twitter accounts to examine the contentions of smaller-scale, often interview-focused studies of the group. Such studies reject claims by the group to its nebulous, leaderless nature <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|> <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>. Furthermore, they suggest that Anonymous fractured as the result of the arrests of key affiliates <|cite_start|> (Reference: `{Anonymous: Not available.) 
<|cite_end|> <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>; a factor which again refutes the argument of a decentralised group structure.
To achieve this aim, this paper uses computational methods -- specifically machine learning classifiers, social network analysis (SNA), and topic modelling -- to investigate how the findings of qualitative studies of the group, whose results are largely derived from interviews and the examination of secondary sources (i.e., newspaper reports) <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|> <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>, compare to a larger-scale study of Anonymous' actual behaviours on Twitter. Specifically, through our work, this paper:
\begin{itemize}
\item Identifies the presence of a sizeable network of Anonymous Twitter accounts -- containing more than 20,000 Anons -- using machine learning methods.
\item Uses SNA and centrality measures to map how influence is distributed across the Anonymous network, confirming the findings of smaller-scale studies (e.g., <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>) that influence is generally the purview of a small number of members.
\item Examines how this network has changed over time relative to the arrests of key Anons in the 2011-2013 period <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>. This longer-term study reveals a network that is seeing a rise in account inactivity, and a decrease in new members.
\item Compares the overarching tweet content of `key' influencer accounts using topic modelling, finding that each account's tweets follow similar lines of content. Again strengthening the findings of smaller-scale, qualitative studies.
\end{itemize}
From this, our research's large-scale study of Anonymous on Twitter concludes that, contrary to the group's claims, Anonymous displays a far less organisationally flat structure than the group aspires to. Such findings have been suggested by past smaller-scale studies, but as far as we know our work is the first large-scale study to follow a more systematic and computational approach.
We expect the insights provided into this group to be of general interest to researchers, cyber security professionals, members of the law enforcement community, and the public, as they increase our overall understanding of amorphous hacktivist groups and their use of social media.
Related Work
\label{Background} <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>~( <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>), in his qualitative analysis of Anonymous' power dynamics, described the group as follows:
\begin{quote}
Anonymous lacks a central authority, has no foundational ideology, does not represent categorically defined groups, does not consistently endorse ideologies, and has no fixed objective.
\end{quote}
This structure allows for the prevalence of multiple conflicting motivations and ideological goals, with the group advocating for nihilism and idealism, libertarianism and socialism, pranks (often referred to as `lulz' a corruption of LOL (laugh out loud), generally used to describe acts that sought humour at the expense of others) and activism, freedom of speech and the suppression of speech <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|> <|cite_start|> (Reference: `{Anonymous: Not available.) <|cite_end|>. As the group matured over time, these targets and goals began to be referred to as `Ops' by members of Anonymous <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. 
Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>. The inception of these Ops generally followed the nebulous structure that the group subscribed to, with the success of an Op directly tying in to its ability to attract enough Anonymous members to join it <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>.
Instead of describing Anonymous using group-based theory, Beraldo used the phrase ``contentious brand'' -- a group defined, and singularly united, by the ``Anonymous signifier'' <|cite_start|> (Reference: Contentious Branding: Reassembling Social Movements Through Digital Mediators: This dissertation wishes to contribute to the sociological debate on protest movements by developing the notion of ‘contentious branding’ as a reflection emerging from the digital exploration of two empirical cases that challenge social movement theory: Occupy and Anonymous. The research was orientated by three interrelated questions operating at a methodological, empirical and theoretical level: How can digital research remediate the study of social movements? What sort of assemblages are articulated around the contentious brands Occupy and Anonymous? How does a branding perspective add to or amend traditional theories of social movements? The argument is built on a complexity-orientated epistemological background, interweaving insights derived from assemblage theory, actor-network theory, socio-semiotics and second-order cybernetics. The empirical research has been undertaken by means of digital techniques: Application Programming Interfaces of popular social media (mostly, Twitter and Facebook) have been pulled for data; the #Occupy and #Anonymous hashtags have been employed as research devices to set the limit of the analysis; and the datasets have been explored mostly by means of network analysis and computer-assisted content analysis techniques. The core contribution of the dissertation is to introduce and develop, within the field of social movement theory, the notion of ‘contentious branding’, to cope with the theoretical challenges highlighted by the empirical sections. A branding perspective on social movements not only fits these specific cases better: it intends to provide an epistemological and methodological device, to sustain a non-essentialist understanding of social movements, especially in the cases of digitalization of empirical phenomena and research methods.) <|cite_end|>. This notion of signification is central to Anonymous, from the Guy Fawkes mask, to the headless businessman, to the grandiose ``We are Legion'' style of communication <|cite_start|> (Reference: Contentious Branding: Reassembling Social Movements Through Digital Mediators: This dissertation wishes to contribute to the sociological debate on protest movements by developing the notion of ‘contentious branding’ as a reflection emerging from the digital exploration of two empirical cases that challenge social movement theory: Occupy and Anonymous. The research was orientated by three interrelated questions operating at a methodological, empirical and theoretical level: How can digital research remediate the study of social movements? What sort of assemblages are articulated around the contentious brands Occupy and Anonymous? How does a branding perspective add to or amend traditional theories of social movements? The argument is built on a complexity-orientated epistemological background, interweaving insights derived from assemblage theory, actor-network theory, socio-semiotics and second-order cybernetics. 
The empirical research has been undertaken by means of digital techniques: Application Programming Interfaces of popular social media (mostly, Twitter and Facebook) have been pulled for data; the #Occupy and #Anonymous hashtags have been employed as research devices to set the limit of the analysis; and the datasets have been explored mostly by means of network analysis and computer-assisted content analysis techniques. The core contribution of the dissertation is to introduce and develop, within the field of social movement theory, the notion of ‘contentious branding’, to cope with the theoretical challenges highlighted by the empirical sections. A branding perspective on social movements not only fits these specific cases better: it intends to provide an epistemological and methodological device, to sustain a non-essentialist understanding of social movements, especially in the cases of digitalization of empirical phenomena and research methods.) <|cite_end|> <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>. And it is the freely available nature of these methods of identification that allow for this movement to be so amorphous. As highlighted by <|cite_start|> (Reference: The Group Element of Cybercrime: Types, Dynamics, and Criminal Operations: While cybercrime can often be an individual activity pursued by lone hackers, it has increasingly grown into a group activity, with networks across the world. This chapter critically examines the group element of cybercrime from several perspectives. It identifies the platforms that online groups---cybercriminal and otherwise---use to interact, and considers groups as both perpetrators and victims of cybercrime. 
A key novelty is the discovery of new types of online groups whose collective actions border on criminality. The chapter also analyzes how online cybercrime groups form, organize, and operate. It explores issues such as trust, motives, and means, and draws on several poignant examples, from Anonymous to LulzSec, to illustrate the arguments.) <|cite_end|>~( <|cite_start|> (Reference: The Group Element of Cybercrime: Types, Dynamics, and Criminal Operations: While cybercrime can often be an individual activity pursued by lone hackers, it has increasingly grown into a group activity, with networks across the world. This chapter critically examines the group element of cybercrime from several perspectives. It identifies the platforms that online groups---cybercriminal and otherwise---use to interact, and considers groups as both perpetrators and victims of cybercrime. A key novelty is the discovery of new types of online groups whose collective actions border on criminality. The chapter also analyzes how online cybercrime groups form, organize, and operate. It explores issues such as trust, motives, and means, and draws on several poignant examples, from Anonymous to LulzSec, to illustrate the arguments.) <|cite_end|>), the Anonymous Twitter account @GroupAnon stated:
``\textit{No, this is not the official \#Anonymous account. There is no official account. We have no central leadership. (Other than the FBI/NSA, joke)}''.
Indeed, the central reason there can be no official Anonymous account is that membership relies entirely on use of the Anonymous brand, which is freely available to all, rather than on any formal process of approved membership.
A point of interest however is that Anonymous' position of having no central-authority has often been contradicted in practice. Although both Olson and Uitermark noted Anonymous' claims to a flat leadership structure, they both suggested that Anonymous' reality -- at least on IRC (internet relay chat) -- was far less a flat structure than one with a clear set of leaders <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|> <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>. Uitermark and Olson described the existence of a `\#Command' room on IRC, in which these self appointed leaders -- without the knowledge of other Anons -- would plan the group's Ops. 
Moreover, a considerable change was noted in the group after the arrest of several members of the \#Command board (also members of the Anonymous splinter group LulzSec <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>) in 2012. After this, <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>~( <|cite_start|> (Reference: Complex Contention: Analyzing Power Dynamics Within {Anonymous: Abstract Anonymous is notoriously elusive as the movement takes on radically different guises, constantly mutates, and traverses national borders and ideological divides. Since Anonymous is difficult to grasp with conventional social movement theory, this paper uses insights from complexity theory to analyze the movement’s evolution in general and its dynamics of power in particular. 
While participants in Anonymous radically reject hierarchy and leadership, dominant groups emerged at various points in the movement’s evolution. This paper aims to explain how such dominant groups emerge and concentrate power and how they subsequently dissolve and lose power. Drawing on ethnographic research as well as secondary sources, it identifies mechanisms of power concentration and diffusion within nominally horizontalist movements.) <|cite_end|>) concluded that the group had fragmented considerably, stating that:
\begin{quote}
Anonymous lived on ...\ as a set of symbols and communication channels ...\ appropriated by a range of different groups for a range of different purposes.
\end{quote}
In turn, the group seemed to have lost the coherence present in its early days, leading to a drastic fall in notable operations and exploits <|cite_start|> (Reference: `{Anonymous: Not available.) <|cite_end|>.
An additional key point of Anonymous, as elucidated by <|cite_start|> (Reference: The Group Element of Cybercrime: Types, Dynamics, and Criminal Operations: While cybercrime can often be an individual activity pursued by lone hackers, it has increasingly grown into a group activity, with networks across the world. This chapter critically examines the group element of cybercrime from several perspectives. It identifies the platforms that online groups---cybercriminal and otherwise---use to interact, and considers groups as both perpetrators and victims of cybercrime. A key novelty is the discovery of new types of online groups whose collective actions border on criminality. The chapter also analyzes how online cybercrime groups form, organize, and operate. It explores issues such as trust, motives, and means, and draws on several poignant examples, from Anonymous to LulzSec, to illustrate the arguments.) <|cite_end|>~( <|cite_start|> (Reference: The Group Element of Cybercrime: Types, Dynamics, and Criminal Operations: While cybercrime can often be an individual activity pursued by lone hackers, it has increasingly grown into a group activity, with networks across the world. This chapter critically examines the group element of cybercrime from several perspectives. It identifies the platforms that online groups---cybercriminal and otherwise---use to interact, and considers groups as both perpetrators and victims of cybercrime. A key novelty is the discovery of new types of online groups whose collective actions border on criminality. The chapter also analyzes how online cybercrime groups form, organize, and operate. It explores issues such as trust, motives, and means, and draws on several poignant examples, from Anonymous to LulzSec, to illustrate the arguments.) <|cite_end|>), is that it is a group that has a strong ``public-facing nature''. Their work noted the presence of several Twitter profiles controlled by Anonymous affiliates, and the group further confirmed this via its willingness to engage with journalists; be it through IRC or even via interviews <|cite_start|> (Reference: We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. 
The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security.) <|cite_end|>. By being public-facing, Anonymous can easily capture more media attention and bring new recruits to whichever Op the group, or a splinter of the group, is executing. <|paper_end|>
"<|reference_start|> We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security. <|reference_end|>",
"<|reference_start|> We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security. <|reference_end|>",
"<|reference_start|> We Are Anonymous: Inside the Hacker World of LulzSec, Anonymous, and the Global Cyber Insurgency: A thrilling, exclusive expose of the hacker collectives Anonymous and LulzSec. WE ARE ANONYMOUS is the first full account of how a loosely assembled group of hackers scattered across the globe formed a new kind of insurgency, seized headlines, and tortured the feds-and the ultimate betrayal that would eventually bring them down. Parmy Olson goes behind the headlines and into the world of Anonymous and LulzSec with unprecedented access, drawing upon hundreds of conversations with the hackers themselves, including exclusive interviews with all six core members of LulzSec. In late 2010, thousands of hacktivists joined a mass digital assault on the websites of VISA, MasterCard, and PayPal to protest their treatment of WikiLeaks. Other targets were wide ranging-the websites of corporations from Sony Entertainment and Fox to the Vatican and the Church of Scientology were hacked, defaced, and embarrassed-and the message was that no one was safe. Thousands of user accounts from pornography websites were released, exposing government employees and military personnel.Although some attacks were perpetrated by masses of users who were rallied on the message boards of 4Chan, many others were masterminded by a small, tight-knit group of hackers who formed a splinter group of Anonymous called LulzSec. The legend of Anonymous and LulzSec grew in the wake of each ambitious hack. But how were they penetrating intricate corporate security systems? Were they anarchists or activists? Teams or lone wolves? A cabal of skilled hackers or a disorganized bunch of kids?WE ARE ANONYMOUS delves deep into the internet's underbelly to tell the incredible full story of the global cyber insurgency movement, and its implications for the future of computer security. <|reference_end|>",
"<|reference_start|> `{Anonymous: Not available. <|reference_end|>"
] | [
10,
14,
20,
28
] | {"<|cite_1|>": "ss-906751", "<|cite_2|>": "ss-906751", "<|cite_3|>": "ss-906753", "<|cite_4|>": "ss-906751", "<|multi_cite_5_1|>": "ss-906749", "<|multi_cite_5_2|>": "ss-906753", "<|multi_cite_6_1|>": "ss-906751", "<|multi_cite_6_2|>": "ss-906753", "<|multi_cite_7_1|>": "ss-906753", "<|multi_cite_7_2|>": "ss-906749", "<|cite_8|>": "ss-906753", "<|cite_9|>": "ss-906749", "<|cite_23|>": "ss-906749", "<|cite_19|>": "ss-906749", "<|multi_cite_10_1|>": "ss-906753", "<|multi_cite_10_2|>": "ss-906751", "<|cite_11|>": "ss-906753", "<|cite_12|>": "ss-906753", "<|cite_13|>": "ss-1399096", "<|multi_cite_14_1|>": "ss-1399096", "<|multi_cite_14_2|>": "ss-906753", "<|cite_24|>": "arxiv-186779", "<|cite_20|>": "arxiv-186779", "<|multi_cite_15_1|>": "ss-906753", "<|multi_cite_15_2|>": "ss-906749", "<|cite_16|>": "ss-906753", "<|cite_25|>": "ss-906749", "<|cite_21|>": "ss-906749", "<|cite_17|>": "ss-906751", "<|cite_26|>": "arxiv-186779", "<|cite_22|>": "arxiv-186779", "<|cite_18|>": "ss-906753"} |
2404.16456-0 | <|paper_start|> Title: Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities
Abstract: Correlation-Decoupled Knowledge Distillation for Multimodal Sentiment Analysis with Incomplete Modalities: Multimodal sentiment analysis (MSA) aims to understand human sentiment through multimodal data. Most MSA efforts are based on the assumption of modality completeness. However, in real-world applications, some practical factors cause uncertain modality missingness, which drastically degrades the model's performance. To this end, we propose a Correlation-decoupled Knowledge Distillation (CorrKD) framework for the MSA task under uncertain missing modalities. Specifically, we present a sample-level contrastive distillation mechanism that transfers comprehensive knowledge containing cross-sample correlations to reconstruct missing semantics. Moreover, a category-guided prototype distillation mechanism is introduced to capture cross-category correlations using category prototypes to align feature distributions and generate favorable joint representations. Finally, we design a response-disentangled consistency distillation strategy to optimize the sentiment decision boundaries of the student network through response disentanglement and mutual information maximization. Comprehensive experiments on three datasets indicate that our framework achieves favorable improvements over several baselines.
Introduction
\label{sec:intro}
\setlength{\epigraphwidth}{0.45\textwidth}
\epigraph{\emph{``Correlations serve as the beacon through the fog of the missingness.''}}{\footnotesize--\emph{Lee \& Dicken}}
\vspace{-6pt}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{images/intro.pdf}
\caption{ A traditional model outputs a correct prediction when given a sample with complete modalities, but predicts incorrectly for a sample with missing modalities. We define two missing-modality cases: (i) intra-modality missingness (\emph{i.e.}, the \textcolor{Magenta}{pink} areas) and (ii) inter-modality missingness (\emph{i.e.}, the \textcolor{Goldenrod}{yellow} area). A minimal simulation sketch of these two cases follows the figure.}
\label{fig:example}
\end{figure}
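To make the two missing-modality cases in the figure concrete, the minimal Python sketch below simulates them on toy feature sequences; the array shapes, masking ratio, and zero-filling convention are our assumptions and may differ from the corruption protocol actually used in the experiments.
\begin{verbatim}
# Illustrative simulation of the two missing-modality cases from the figure.
# Assumptions: each modality is a (time_steps, feature_dim) array; shapes,
# the masking ratio, and zero-filling are placeholders, not the paper's
# exact protocol.
import numpy as np

rng = np.random.default_rng(0)
sample = {
    "language": rng.normal(size=(20, 768)),
    "audio": rng.normal(size=(50, 74)),
    "visual": rng.normal(size=(50, 35)),
}

def intra_modality_missing(features, ratio=0.3):
    """Randomly drop a fraction of frames within one modality (pink areas)."""
    corrupted = features.copy()
    num_drop = int(ratio * len(corrupted))
    drop_idx = rng.choice(len(corrupted), size=num_drop, replace=False)
    corrupted[drop_idx] = 0.0
    return corrupted

def inter_modality_missing(sample, missing=("audio",)):
    """Drop entire modalities from the sample (yellow area)."""
    return {m: np.zeros_like(x) if m in missing else x for m, x in sample.items()}

corrupted_language = intra_modality_missing(sample["language"])
corrupted_sample = inter_modality_missing(sample, missing=("audio",))
print(corrupted_language.shape, {m: x.shape for m, x in corrupted_sample.items()})
\end{verbatim}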
Multimodal sentiment analysis (MSA) has attracted wide attention in recent years. Different from the traditional unimodal-based emotion recognition task <|cite_start|> (Reference: IEEE International Conference on Image Processing {(ICIP): ICIP is the premier international forum for the technological advances and research results in the fields of theoretical, experimental, and applied visual information processing. ICIP is a flagship conference of the IEEE Signal Processing Society (SPS) and attracts over 1,200 attendees annually. Research frontiers in fields ranging from traditional image processing applications to evolving multimedia and visual technologies are regularly advanced by results first reported in ICIP sessions and events.) <|cite_end|>, MSA understands and recognizes human emotions through multiple modalities, including language, audio, and visual <|cite_start|> (Reference: Towards multimodal sentiment analysis: harvesting opinions from the web: With more than 10,000 new videos posted online every day on social websites such as YouTube and Facebook, the internet is becoming an almost infinite source of information. One crucial challenge for the coming decade is to be able to harvest relevant information from this constant flow of multimodal data. This paper addresses the task of multimodal sentiment analysis, and conducts proof-of-concept experiments that demonstrate that a joint model that integrates visual, audio, and textual features can be effectively used to identify sentiment in Web videos. This paper makes three important contributions. First, it addresses for the first time the task of tri-modal sentiment analysis, and shows that it is a feasible task that can benefit from the joint exploitation of visual, audio and textual modalities. Second, it identifies a subset of audio-visual features relevant to sentiment analysis and present guidelines on how to integrate these features. Finally, it introduces a new dataset consisting of real online data, which will be useful for future research in this area.) <|cite_end|>.
Previous studies have shown that combining complementary information among different modalities facilitates the generation of more valuable joint multimodal representations <|cite_start|> (Reference: QuTI! Quantifying Text-Image Consistency in Multimodal Documents: The World Wide Web and social media platforms have become popular sources for news and information. Typically, multimodal information, e.g., image and text is used to convey information more effectively and to attract attention. While in most cases image content is decorative or depicts additional information, it has also been leveraged to spread misinformation and rumors in recent years. In this paper, we present a Web-based demo application that automatically quantifies the cross-modal relations of entities (persons, locations, and events) in image and text. The applications are manifold. For example, the system can help users to explore multimodal articles more efficiently, or can assist human assessors and fact-checking efforts in the verification of the credibility of news stories, tweets, or other multimodal documents.) <|cite_end|> <|cite_start|> (Reference: Web table retrieval using multimodal deep learning: We address the web table retrieval task, aiming to retrieve and rank web tables as whole answers to a given information need. To this end, we formally define web tables as multimodal objects. We then suggest a neural ranking model, termed MTR, which makes a novel use of Gated Multimodal Units (GMUs) to learn a joint-representation of the query and the different table modalities. We further enhance this model with a co-learning approach which utilizes automatically learned query-independent and query-dependent "helper'' labels. We evaluate the proposed solution using both ad hoc queries (WikiTables) and natural language questions (GNQtables). Overall, we demonstrate that our approach surpasses the performance of previously studied state-of-the-art baselines.) <|cite_end|>.
Under the deep learning paradigm <|cite_start|> (Reference: AIDE: A Vision-Driven Multi-View, Multi-Modal, Multi-Tasking Dataset for Assistive Driving Perception: Driver distraction has become a significant cause of severe traffic accidents over the past decade. Despite the growing development of vision-driven driver monitoring systems, the lack of comprehensive perception datasets restricts road safety and traffic security. In this paper, we present an AssIstive Driving pErception dataset (AIDE) that considers context information both inside and outside the vehicle in naturalistic scenarios. AIDE facilitates holistic driver monitoring through three distinctive characteristics, including multi-view settings of driver and scene, multi-modal annotations of face, body, posture, and gesture, and four pragmatic task designs for driving understanding. To thoroughly explore AIDE, we provide experimental benchmarks on three kinds of baseline frameworks via extensive methods. Moreover, two fusion strategies are introduced to give new insights into learning effective multi-stream/modal representations. We also systematically investigate the importance and rationality of the key components in AIDE and benchmarks. The project link is https://github.com/ydk122024/AIDE.) <|cite_end|> <|cite_start|> (Reference: Spatio-Temporal Domain Awareness for Multi-Agent Collaborative Perception: Multi-agent collaborative perception as a potential application for vehicle-to-everything communication could significantly improve the perception performance of autonomous vehicles over single-agent perception. However, several challenges remain in achieving pragmatic information sharing in this emerging research. In this paper, we propose SCOPE, a novel collaborative perception framework that aggregates the spatio-temporal awareness characteristics across on-road agents in an end-to-end manner. Specifically, SCOPE has three distinct strengths: i) it considers effective semantic cues of the temporal context to enhance current representations of the target agent; ii) it aggregates perceptually critical spatial information from heterogeneous agents and overcomes localization errors via multi-scale feature interactions; iii) it integrates multi-source representations of the target agent based on their complementary contributions by an adaptive fusion paradigm. To thoroughly evaluate SCOPE, we consider both real-world and simulated scenarios of collaborative 3D object detection tasks on three datasets. Extensive experiments demonstrate the superiority of our approach and the necessity of the proposed components.) <|cite_end|> <|cite_start|> (Reference: What2comm: Towards Communication-efficient Collaborative Perception via Feature Decoupling: Multi-agent collaborative perception has received increasing attention recently as an emerging application in driving scenarios. Despite advancements in previous approaches, challenges remain due to redundant communication patterns and vulnerable collaboration processes. To address these issues, we propose What2comm, an end-to-end collaborative perception framework to achieve a trade-off between perception performance and communication bandwidth. Our novelties lie in three aspects. First, we design an efficient communication mechanism based on feature decoupling to transmit exclusive and common feature maps among heterogeneous agents to provide perceptually holistic messages. 
Secondly, a spatio-temporal collaboration module is introduced to integrate complementary information from collaborators and temporal ego cues, leading to a robust collaboration procedure against transmission delay and localization errors. Ultimately, we propose a common-aware fusion strategy to refine final representations with informative common features. Comprehensive experiments in real-world and simulated scenarios demonstrate the effectiveness of What2comm.) <|cite_end|> <|cite_start|> (Reference: CPR-CLIP: Multimodal Pre-training for Composite Error Recognition in CPR Training: The expensive cost of the medical skill training paradigm hinders the development of medical education, which has attracted widespread attention in the intelligent signal processing community. To address the issue of composite error action recognition in Cardiopulmonary Resuscitation (CPR) training, this letter proposes a multimodal pre-training framework named CPR-CLIP based on prompt engineering. Specifically, we design three prompts to fuse multiple errors naturally on the semantic level and then align linguistic and visual features via the contrastive pre-training loss. Extensive experiments verify the effectiveness of the CPR-CLIP. Ultimately, the CPR-CLIP is encapsulated to an electronic assistant, and four doctors are recruited for evaluation. Nearly four times efficiency improvement is observed in comparative experiments, which demonstrates the practicality of the system. We hope this work brings new insights to the intelligent medical skill training and signal processing communities simultaneously.) <|cite_end|> <|cite_start|> (Reference: TSA-Net: Tube Self-Attention Network for Action Quality Assessment: In recent years, assessing action quality from videos has attracted growing attention in computer vision community and human computer interaction. Most existing approaches usually tackle this problem by directly migrating the model from action recognition tasks, which ignores the intrinsic differences within the feature map such as foreground and background information. To address this issue, we propose a Tube Self-Attention Network (TSA-Net) for action quality assessment (AQA). Specifically, we introduce a single object tracker into AQA and propose the Tube Self-Attention Module (TSA), which can efficiently generate rich spatio-temporal contextual information by adopting sparse feature interactions. The TSA module is embedded in existing video networks to form TSA-Net. Overall, our TSA-Net is with the following merits: 1) High computational efficiency, 2) High flexibility, and 3) The state-of-the art performance. Extensive experiments are conducted on popular action quality assessment datasets including AQA-7 and MTL-AQA. Besides, a dataset named Fall Recognition in Figure Skating (FR-FS) is proposed to explore the basic action assessment in the figure skating scene.) <|cite_end|> <|cite_start|> (Reference: MISS: A Generative Pretraining and Finetuning Approach for Med-VQA: Medical visual question answering (VQA) is a challenging multimodal task, where Vision-Language Pre-training (VLP) models can effectively improve the generalization performance. However, most methods in the medical field treat VQA as an answer classification task which is difficult to transfer to practical application scenarios. Additionally, due to the privacy of medical images and the expensive annotation process, large-scale medical image-text pairs datasets for pretraining are severely lacking. 
In this paper, we propose a large-scale MultI-task Self-Supervised learning based framework (MISS) for medical VQA tasks. Unlike existing methods, we treat medical VQA as a generative task. We unify the text encoder and multimodal encoder and align image-text features through multi-task learning. Furthermore, we propose a Transfer-and-Caption method that extends the feature space of single-modal image datasets using Large Language Models (LLMs), enabling those traditional medical vision field task data to be applied to VLP. Experiments show that our method achieves excellent results with fewer multimodal datasets and demonstrates the advantages of generative VQA models.) <|cite_end|> <|cite_start|> (Reference: Towards Simultaneous Segmentation of Liver Tumors and Intrahepatic Vessels via Cross-attention Mechanism: Accurate visualization of liver tumors and their surrounding blood vessels is essential for noninvasive diagnosis and prognosis prediction of tumors. In medical image segmentation, there is still a lack of in-depth research on the simultaneous segmentation of liver tumors and peritumoral blood vessels. To this end, we collect the first liver tumor, and vessel segmentation benchmark datasets containing 52 portal vein phase computed tomography images with liver, liver tumor, and vessel annotations. In this case, we propose a 3D U-shaped Cross-Attention Network (UCA-Net) that utilizes a tailored cross-attention mechanism instead of the traditional skip connection to effectively model the encoder and decoder feature. Specifically, the UCA-Net uses a channel-wise cross-attention module to reduce the semantic gap between encoder and decoder and a slice-wise cross-attention module to enhance the contextual semantic learning ability among distinct slices. Experimental results show that the proposed UCA-Net can accurately segment 3D medical images and achieve state-of-the-art performance on the liver tumor and intrahepatic vessel segmentation task.) <|cite_end|>,
numerous studies assuming the availability of all modalities during both training and inference stages <|cite_start|> (Reference: Misa: Modality-invariant and-specific representations for multimodal sentiment analysis: Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach, addressing this task, has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.) <|cite_end|> <|cite_start|> (Reference: Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis: Representation Learning is a significant and challenging task in multimodal learning. Effective modality representations should contain two parts of characteristics: the consistency and the difference. Due to the unified multimodal annotation, existing methods are restricted in capturing differentiated information. However, additional uni-modal annotations are high time- and labor-cost. In this paper, we design a label generation module based on the self-supervised learning strategy to acquire independent unimodal supervisions. Then, joint training the multi-modal and uni-modal tasks to learn the consistency and difference, respectively. Moreover, during the training stage, we design a weight-adjustment strategy to balance the learning progress among different subtasks. That is to guide the subtasks to focus on samples with a larger difference between modality supervisions. Last, we conduct extensive experiments on three public multimodal baseline datasets. The experimental results validate the reliability and stability of auto-generated unimodal supervisions. On MOSI and MOSEI datasets, our method surpasses the current state-of-the-art methods. On the SIMS dataset, our method achieves comparable performance than human-annotated unimodal labels. The full codes are available at https://github.com/thuiar/Self-MM.) <|cite_end|> <|cite_start|> (Reference: Disentangled Representation Learning for Multimodal Emotion Recognition: Multimodal emotion recognition aims to identify human emotions from text, audio, and visual modalities. Previous methods either explore correlations between different modalities or design sophisticated fusion strategies. However, the serious problem is that the distribution gap and information redundancy often exist across heterogeneous modalities, resulting in learned multimodal representations that may be unrefined. 
Motivated by these observations, we propose a Feature-Disentangled Multimodal Emotion Recognition (FDMER) method, which learns the common and private feature representations for each modality. Specifically, we design the common and private encoders to project each modality into modality-invariant and modality-specific subspaces, respectively. The modality-invariant subspace aims to explore the commonality among different modalities and reduce the distribution gap sufficiently. The modality-specific subspaces attempt to enhance the diversity and capture the unique characteristics of each modality. After that, a modality discriminator is introduced to guide the parameter learning of the common and private encoders in an adversarial manner. We achieve the modality consistency and disparity constraints by designing tailored losses for the above subspaces. Furthermore, we present a cross-modal attention fusion module to learn adaptive weights for obtaining effective multimodal representations. The final representation is used for different downstream tasks. Experimental results show that the FDMER outperforms the state-of-the-art methods on two multimodal emotion recognition benchmarks. Moreover, we further verify the effectiveness of our model via experiments on the multimodal humor detection task.) <|cite_end|> <|cite_start|> (Reference: Learning Modality-Specific and-Agnostic Representations for Asynchronous Multimodal Language Sequences: Understanding human behaviors and intents from videos is a challenging task. Video flows usually involve time-series data from different modalities, such as natural language, facial gestures, and acoustic information. Due to the variable receiving frequency for sequences from each modality, the collected multimodal streams are usually unaligned. For multimodal fusion of asynchronous sequences, the existing methods focus on projecting multiple modalities into a common latent space and learning the hybrid representations, which neglects the diversity of each modality and the commonality across different modalities. Motivated by this observation, we propose a Multimodal Fusion approach for learning modality-Specific and modality-Agnostic representations (MFSA) to refine multimodal representations and leverage the complementarity across different modalities. Specifically, a predictive self-attention module is used to capture reliable contextual dependencies and enhance the unique features over the modality-specific spaces. Meanwhile, we propose a hierarchical cross-modal attention module to explore the correlations between cross-modal elements over the modality-agnostic space. In this case, a double-discriminator strategy is presented to ensure the production of distinct representations in an adversarial manner. Eventually, the modality-specific and -agnostic multimodal representations are used together for downstream tasks. Comprehensive experiments on three multimodal datasets clearly demonstrate the superiority of our approach.) <|cite_end|> <|cite_start|> (Reference: Emotion Recognition for Multiple Context Awareness: ) <|cite_end|> <|cite_start|> (Reference: Decoupled Multimodal Distilling for Emotion Recognition: Human multimodal emotion recognition (MER) aims to perceive human emotions via language, visual and acoustic modalities. Despite the impressive performance of previous MER approaches, the inherent multimodal heterogeneities still haunt and the contribution of different modalities varies significantly. 
In this work, we mitigate this issue by proposing a decoupled multimodal distillation (DMD) approach that facilitates flexible and adaptive crossmodal knowledge distillation, aiming to enhance the discriminative features of each modality. Specially, the representation of each modality is decoupled into two parts, i.e., modality-irrelevant/-exclusive spaces, in a self-regression manner. DMD utilizes a graph distillation unit (GD-Unit) for each decoupled part so that each GD can be performed in a more specialized and effective manner. A GD-Unit consists of a dynamic graph where each vertice represents a modality and each edge indicates a dynamic knowledge distillation. Such GD paradigm provides a flexible knowledge transfer manner where the distillation weights can be automatically learned, thus enabling diverse crossmodal knowledge transfer patterns. Experimental results show DMD consistently obtains superior performance than state-of-the-art MER methods. Visualization results show the graph edges in DMD exhibit meaningful distributional patterns w.r.t. the modality-irrelevant/-exclusive feature spaces. Codes are released at \url{https://github.com/mdswyz/DMD}.) <|cite_end|> <|cite_start|> (Reference: Robust Emotion Recognition in Context Debiasing: Context-aware emotion recognition (CAER) has recently boosted the practical applications of affective computing techniques in unconstrained environments. Mainstream CAER methods invariably extract ensemble representations from diverse contexts and subject-centred characteristics to perceive the target person's emotional state. Despite advancements, the biggest challenge remains due to context bias interference. The harmful bias forces the models to rely on spurious correlations between background contexts and emotion labels in likelihood estimation, causing severe performance bottlenecks and confounding valuable context priors. In this paper, we propose a counterfactual emotion inference (CLEF) framework to address the above issue. Specifically, we first formulate a generalized causal graph to decouple the causal relationships among the variables in CAER. Following the causal graph, CLEF introduces a non-invasive context branch to capture the adverse direct effect caused by the context bias. During the inference, we eliminate the direct context effect from the total causal effect by comparing factual and counterfactual outcomes, resulting in bias mitigation and robust prediction. As a model-agnostic framework, CLEF can be readily integrated into existing methods, bringing consistent performance gains.) <|cite_end|> <|cite_start|> (Reference: Context De-confounded Emotion Recognition: Context-Aware Emotion Recognition (CAER) is a crucial and challenging task that aims to perceive the emotional states of the target person with contextual information. Recent approaches invariably focus on designing sophisticated architectures or mechanisms to extract seemingly meaningful representations from subjects and contexts. However, a long-overlooked issue is that a context bias in existing datasets leads to a significantly unbalanced distribution of emotional states among different context scenarios. Concretely, the harmful bias is a confounder that misleads existing models to learn spurious correlations based on conventional likelihood estimation, significantly limiting the models' performance. 
To tackle the issue, this paper provides a causality-based perspective to disentangle the models from the impact of such bias, and formulate the causalities among variables in the CAER task via a tailored causal graph. Then, we propose a Contextual Causal Intervention Module (CCIM) based on the backdoor adjustment to de-confound the confounder and exploit the true causal effect for model training. CCIM is plug-in and model-agnostic, which improves diverse state-of-the-art approaches by considerable margins. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our CCIM and the significance of causal insight.) <|cite_end|> <|cite_start|> (Reference: Target and source modality co-reinforcement for emotion understanding from asynchronous multimodal sequences: ) <|cite_end|> <|cite_start|> (Reference: Towards Multimodal Human Intention Understanding Debiasing via Subject-Deconfounding: Multimodal intention understanding (MIU) is an indispensable component of human expression analysis (e.g., sentiment or humor) from heterogeneous modalities, including visual postures, linguistic contents, and acoustic behaviors. Existing works invariably focus on designing sophisticated structures or fusion strategies to achieve impressive improvements. Unfortunately, they all suffer from the subject variation problem due to data distribution discrepancies among subjects. Concretely, MIU models are easily misled by distinct subjects with different expression customs and characteristics in the training data to learn subject-specific spurious correlations, significantly limiting performance and generalizability across uninitiated subjects.Motivated by this observation, we introduce a recapitulative causal graph to formulate the MIU procedure and analyze the confounding effect of subjects. Then, we propose SuCI, a simple yet effective causal intervention module to disentangle the impact of subjects acting as unobserved confounders and achieve model training via true causal effects. As a plug-and-play component, SuCI can be widely applied to most methods that seek unbiased predictions. Comprehensive experiments on several MIU benchmarks clearly demonstrate the effectiveness of the proposed module.) <|cite_end|> <|cite_start|> (Reference: Towards Multimodal Sentiment Analysis Debiasing via Bias Purification: Multimodal Sentiment Analysis (MSA) aims to understand human intentions by integrating emotion-related clues from diverse modalities, such as visual, language, and audio. Unfortunately, the current MSA task invariably suffers from unplanned dataset biases, particularly multimodal utterance-level label bias and word-level context bias. These harmful biases potentially mislead models to focus on statistical shortcuts and spurious correlations, causing severe performance bottlenecks. To alleviate these issues, we present a Multimodal Counterfactual Inference Sentiment (MCIS) analysis framework based on causality rather than conventional likelihood. Concretely, we first formulate a causal graph to discover harmful biases from already-trained vanilla models. In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases. Then, MCIS can make unbiased decisions from biased observations by comparing factual and counterfactual outcomes. We conduct extensive experiments on several standard MSA benchmarks. Qualitative and quantitative results show the effectiveness of the proposed framework.) 
<|cite_end|> <|cite_start|> (Reference: Contextual and Cross-modal Interaction for Multi-modal Speech Emotion Recognition: Speech emotion recognition combining linguistic content and audio signals in the dialog is a challenging task. Nevertheless, previous approaches have failed to explore emotion cues in contextual interactions and ignored the long-range dependencies between elements from different modalities. To tackle the above issues, this letter proposes a multimodal speech emotion recognition method using audio and text data. We first present a contextual transformer module to introduce contextual information via embedding the previous utterances between interlocutors, which enhances the emotion representation of the current utterance. Then, the proposed cross-modal transformer module focuses on the interactions between text and audio modalities, adaptively promoting the fusion from one modality to another. Furthermore, we construct associative topological relation over mini-batch and learn the association between deep fused features with graph convolutional network. Experimental results on the IEMOCAP and MELD datasets show that our method outperforms current state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Text-oriented Modality Reinforcement Network for Multimodal Sentiment Analysis from Unaligned Multimodal Sequences: Multimodal Sentiment Analysis (MSA) aims to mine sentiment information from text, visual, and acoustic modalities. Previous works have focused on representation learning and feature fusion strategies. However, most of these efforts ignored the disparity in the semantic richness of different modalities and treated each modality in the same manner. That may lead to strong modalities being neglected and weak modalities being overvalued. Motivated by these observations, we propose a Text-oriented Modality Reinforcement Network (TMRN), which focuses on the dominance of the text modality in MSA. More specifically, we design a Text-Centered Cross-modal Attention (TCCA) module to make full interaction for text/acoustic and text/visual pairs, and a Text-Gated Self-Attention (TGSA) module to guide the self-reinforcement of the other two modalities. Furthermore, we present an adaptive fusion mechanism to decide the proportion of different modalities involved in the fusion process. Finally, we combine the feature matrices into vectors to get the final representation for the downstream tasks. Experimental results show that our TMRN outperforms the state-of-the-art methods on two MSA benchmarks.) <|cite_end|>.
Nevertheless, this assumption often fails to hold in real-world scenarios, where factors such as background noise, sensor constraints, and privacy concerns can cause modalities to be missing in uncertain patterns.
Missing modalities can significantly impair the effectiveness of models trained on complete modalities.
For instance, as shown in \Cref{fig:example}, the entire visual modality and several frame-level features of the language and audio modalities are missing, leading to an incorrect sentiment prediction.
In recent years, many works <|cite_start|> (Reference: GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation: Conversations have become a critical data format on social media platforms. Understanding conversation from emotion, content and other aspects also attracts increasing attention from researchers due to its widespread application in human-computer interaction. In real-world environments, we often encounter the problem of incomplete modalities, which has become a core issue of conversation understanding. To address this problem, researchers propose various methods. However, existing approaches are mainly designed for individual utterances rather than conversational data, which cannot fully exploit temporal and speaker information in conversations. To this end, we propose a novel framework for incomplete multimodal learning in conversations, called "Graph Complete Network (GCNet)", filling the gap of existing works. Our GCNet contains two well-designed graph neural network-based modules, "Speaker GNN" and "Temporal GNN", to capture temporal and speaker dependencies. To make full use of complete and incomplete data, we jointly optimize classification and reconstruction tasks in an end-to-end manner. To verify the effectiveness of our method, we conduct experiments on three benchmark conversational datasets. Experimental results demonstrate that our GCNet is superior to existing state-of-the-art approaches in incomplete multimodal learning. Code is available at https://github.com/zeroQiaoba/GCNet.) <|cite_end|> <|cite_start|> (Reference: Distribution-Consistent Modal Recovering for Incomplete Multimodal Learning: Recovering missing modality is popular in incomplete multimodal learning because it usually benefits downstream tasks. However, the existing methods often directly estimate missing modalities from the observed ones by deep neural networks, lacking consideration of the distribution gap between modalities, resulting in the inconsistency of distributions between the recovered and the true data. To mitigate this issue, in this work, we propose a novel recovery paradigm, Distribution-Consistent Modal Recovering (DiCMoR), to transfer the distributions from available modalities to missing modalities, which thus maintains the distribution consistency of recovered data. In particular, we design a class-specific flow based modality recovery method to transform cross-modal distributions on the condition of sample class, which could well predict a distribution-consistent space for missing modality by virtue of the invertibility and exact density estimation of normalizing flow. The generated data from the predicted distribution is integrated with available modalities for the task of classification. Experiments show that DiCMoR gains superior performances and is more robust than existing state-of-the-art methods under various missing patterns. Visualization results show that the distribution gaps between recovered modalities and missing modalities are mitigated. Codes are released at https://github.com/mdswyz/DiCMoR.) <|cite_end|> <|cite_start|> (Reference: Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities: Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities. The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities. 
However, existing work learns joint representations by requiring all modalities as input and as a result, the learned representations may be sensitive to noisy or missing modalities at test time. With the recent success of sequence to sequence (Seq2Seq) models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time. In this paper, we propose a method to learn robust joint representations by translating between modalities. Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input. We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities. Once our translation model is trained with paired multimodal data, we only need data from the source modality at test time for final sentiment prediction. This ensures that our model remains robust from perturbations or missing information in the other modalities. We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube. Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to missing or perturbed modalities.) <|cite_end|> <|cite_start|> (Reference: TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis: Multimodal sentiment analysis is an important research area that predicts speaker's sentiment tendency through features extracted from textual, visual and acoustic modalities. The central challenge is the fusion method of the multimodal information. A variety of fusion methods have been proposed, but few of them adopt end-to-end translation models to mine the subtle correlation between modalities. Enlightened by recent success of Transformer in the area of machine translation, we propose a new fusion method, TransModality, to address the task of multimodal sentiment analysis. We assume that translation between modalities contributes to a better joint representation of speaker's utterance. With Transformer, the learned features embody the information both from the source modality and the target modality. We validate our model on multiple multimodal datasets: CMU-MOSI, MELD, IEMOCAP. The experiments show that our proposed method achieves the state-of-the-art performance.) <|cite_end|> <|cite_start|> (Reference: Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities: Multimodal sentiment analysis has been studied under the assumption that all modalities are available. However, such a strong assumption does not always hold in practice, and most of multimodal fusion models may fail when partial modalities are missing. Several works have addressed the missing modality problem; but most of them only considered the single modality missing case, and ignored the practically more general cases of multiple modalities missing. To this end, in this paper, we propose a Tag-Assisted Transformer Encoder (TATE) network to handle the problem of missing uncertain modalities. Specifically, we design a tag encoding module to cover both the single modality and multiple modalities missing cases, so as to guide the network's attention to those missing modalities. 
Besides, we adopt a new space projection pattern to align common vectors. Then, a Transformer encoder-decoder network is utilized to learn the missing modality features. At last, the outputs of the Transformer encoder are used for the final sentiment classification. Extensive experiments are conducted on CMU-MOSI and IEMOCAP datasets, showing that our method can achieve significant improvements compared with several baselines.) <|cite_end|> <|cite_start|> (Reference: Modality translation-based multimodal sentiment analysis under uncertain missing modalities: ) <|cite_end|> <|cite_start|> (Reference: Towards Robust Multimodal Sentiment Analysis under Uncertain Signal Missing: Multimodal Sentiment Analysis (MSA) has attracted widespread research attention recently. Most MSA studies are based on the assumption of signal completeness. However, many inevitable factors in real applications lead to uncertain signal missing, causing significant degradation of model performance. To this end, we propose a Robust multimodal Missing Signal Framework (RMSF) to handle the problem of uncertain signal missing for MSA tasks and can be generalized to other multimodal patterns. Specifically, a hierarchical crossmodal interaction module in RMSF exploits potential complementary semantics among modalities via coarse- and fine-grained crossmodal attention. Furthermore, we design an adaptive feature refinement module to enhance the beneficial semantics of modalities and filter redundant features. Finally, we propose a knowledge-integrated self-distillation module that enables dynamic knowledge integration and bidirectional knowledge transfer within a single network to precisely reconstruct missing semantics. Comprehensive experiments are conducted on two datasets, indicating that RMSF significantly improves MSA performance under both uncertain missing-signal and complete-signal cases.) <|cite_end|> <|cite_start|> (Reference: A Unified Self-Distillation Framework for Multimodal Sentiment Analysis with Uncertain Missing Modalities: Multimodal Sentiment Analysis (MSA) has attracted widespread research attention recently. Most MSA studies are based on the assumption of modality completeness. However, many inevitable factors in real-world scenarios lead to uncertain missing modalities, which invalidate the fixed multimodal fusion approaches. To this end, we propose a Unified multimodal Missing modality self-Distillation Framework (UMDF) to handle the problem of uncertain missing modalities in MSA. Specifically, a unified self-distillation mechanism in UMDF drives a single network to automatically learn robust inherent representations from the consistent distribution of multimodal data. Moreover, we present a multi-grained crossmodal interaction module to deeply mine the complementary semantics among modalities through coarse- and fine-grained crossmodal attention. Eventually, a dynamic feature integration module is introduced to enhance the beneficial semantics in incomplete modalities while filtering the redundant information therein to obtain a refined and robust multimodal representation. Comprehensive experiments on three datasets demonstrate that our framework significantly improves MSA performance under both uncertain missing-modality and complete-modality testing conditions.) <|cite_end|>attempt to address the problem of missing modalities in MSA.
As a typical example, MCTN <|cite_start|> (Reference: Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities: Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities. The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities. However, existing work learns joint representations by requiring all modalities as input and as a result, the learned representations may be sensitive to noisy or missing modalities at test time. With the recent success of sequence to sequence (Seq2Seq) models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time. In this paper, we propose a method to learn robust joint representations by translating between modalities. Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input. We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities. Once our translation model is trained with paired multimodal data, we only need data from the source modality at test time for final sentiment prediction. This ensures that our model remains robust from perturbations or missing information in the other modalities. We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube. Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to missing or perturbed modalities.) <|cite_end|>guarantees the model's robustness to the missing modality case by learning a joint representation through cyclic translation from the source modality to the target modality.
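For concreteness, this family of translation-based objectives can be sketched as follows (a simplified abstraction with our own notation, not the exact formulation of MCTN):
\begin{equation*}
\mathcal{L} \;=\; \ell\big(\mathrm{Dec}_{t}(\mathrm{Enc}(x_{s})),\, x_{t}\big) \;+\; \ell\big(\mathrm{Dec}_{s}(\mathrm{Enc}(\hat{x}_{t})),\, x_{s}\big) \;+\; \lambda\,\mathcal{L}_{\mathrm{task}}\big(f(\mathrm{Enc}(x_{s}))\big),
\end{equation*}
where $x_{s}$ and $x_{t}$ denote the source- and target-modality sequences, $\hat{x}_{t}$ is the translated target, $\ell$ is a reconstruction loss enforcing cycle consistency, and $f$ is the sentiment predictor; only the source modality $x_{s}$ is required at inference.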
However, these methods suffer from the following limitations:
(\romannumeral1) interactions confined to individual samples fail to mine holistically structured semantics; (\romannumeral2) the failure to model cross-category correlations loses sentiment-relevant information and produces confusing distributions among categories; and (\romannumeral3) coarse supervision ignores semantic and distributional alignment.
To address the above issues, we present a \textbf{Corr}elation-decoupled \textbf{K}nowledge \textbf{D}istillation (CorrKD) framework for the MSA task under uncertain missing modalities.
CorrKD makes three core contributions through its tailored components.
Specifically,
(\romannumeral1) a sample-level contrastive distillation mechanism captures holistic cross-sample correlations and transfers valuable supervision signals through contrastive learning; (\romannumeral2) a category-guided prototype distillation mechanism leverages category prototypes to transfer intra- and inter-category feature variations, thereby delivering sentiment-relevant information and learning robust joint multimodal representations; and (\romannumeral3) a response-disentangled consistency distillation strategy optimizes sentiment decision boundaries and encourages distribution alignment by decoupling heterogeneous responses and maximizing the mutual information between homogeneous sub-responses. Based on these components, CorrKD significantly improves MSA performance under both uncertain missing-modality and complete-modality testing conditions on three multimodal benchmarks.
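For intuition, simplified forms of the first two mechanisms could be instantiated as follows (illustrative only; notation ours), where $z_{i}^{t}$ and $z_{i}^{s}$ denote the teacher (complete-modality) and student (missing-modality) representations of sample $i$:
\begin{equation*}
\mathcal{L}_{\mathrm{contrast}} \;=\; -\frac{1}{N}\sum_{i=1}^{N}\log\frac{\exp\!\big(\mathrm{sim}(z_{i}^{s}, z_{i}^{t})/\tau\big)}{\sum_{j=1}^{N}\exp\!\big(\mathrm{sim}(z_{i}^{s}, z_{j}^{t})/\tau\big)},
\qquad
c_{k} \;=\; \frac{1}{|\mathcal{S}_{k}|}\sum_{i\in\mathcal{S}_{k}} z_{i}^{t},
\end{equation*}
where $\mathrm{sim}(\cdot,\cdot)$ is cosine similarity, $\tau$ is a temperature, and $c_{k}$ is the prototype of category $k$ computed over the teacher features of its samples $\mathcal{S}_{k}$; prototype distillation then encourages the student's relations to $\{c_{k}\}$ to match those of the teacher.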
\vspace{-3pt}
\section{Related Work}
\subsection{Multimodal Sentiment Analysis}
MSA aims to understand and analyze human sentiment by utilizing multiple modalities.
Mainstream MSA studies <|cite_start|> (Reference: Misa: Modality-invariant and-specific representations for multimodal sentiment analysis: Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach, addressing this task, has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework.) <|cite_end|> <|cite_start|> (Reference: Improving Multimodal Fusion with Hierarchical Mutual Information Maximization for Multimodal Sentiment Analysis: In multimodal sentiment analysis (MSA), the performance of a model highly depends on the quality of synthesized embeddings. These embeddings are generated from the upstream process called multimodal fusion, which aims to extract and combine the input unimodal raw data to produce a richer multimodal representation. Previous work either back-propagates the task loss or manipulates the geometric property of feature spaces to produce favorable fusion results, which neglects the preservation of critical task-related information that flows from input to the fusion results. In this work, we propose a framework named MultiModal InfoMax (MMIM), which hierarchically maximizes the Mutual Information (MI) in unimodal input pairs (inter-modality) and between multimodal fusion result and unimodal input in order to maintain task-related information through multimodal fusion. The framework is jointly trained with the main task (MSA) to improve the performance of the downstream MSA task. To address the intractable issue of MI bounds, we further formulate a set of computationally simple parametric and non-parametric methods to approximate their truth value. Experimental results on the two widely used datasets demonstrate the efficacy of our approach. The implementation of this work is publicly available at https://github.com/declare-lab/Multimodal-Infomax.) <|cite_end|> <|cite_start|> (Reference: CubeMLP: An MLP-based Model for Multimodal Sentiment Analysis and Depression Estimation: Multimodal sentiment analysis and depression estimation are two important research topics that aim to predict human mental states using multimodal data. Previous research has focused on developing effective fusion strategies for exchanging and integrating mind-related information from different modalities. Some MLP-based techniques have recently achieved considerable success in a variety of computer vision tasks. 
Inspired by this, we explore multimodal approaches with a feature-mixing perspective in this study. To this end, we introduce CubeMLP, a multimodal feature processing framework based entirely on MLP. CubeMLP consists of three independent MLP units, each of which has two affine transformations. CubeMLP accepts all relevant modality features as input and mixes them across three axes. After extracting the characteristics using CubeMLP, the mixed multimodal features are flattened for task predictions. Our experiments are conducted on sentiment analysis datasets: CMU-MOSI and CMU-MOSEI, and depression estimation dataset: AVEC2019. The results show that CubeMLP can achieve state-of-the-art performance with a much lower computing cost.) <|cite_end|> <|cite_start|> (Reference: Decoupled Multimodal Distilling for Emotion Recognition: Human multimodal emotion recognition (MER) aims to perceive human emotions via language, visual and acoustic modalities. Despite the impressive performance of previous MER approaches, the inherent multimodal heterogeneities still haunt and the contribution of different modalities varies significantly. In this work, we mitigate this issue by proposing a decoupled multimodal distillation (DMD) approach that facilitates flexible and adaptive crossmodal knowledge distillation, aiming to enhance the discriminative features of each modality. Specially, the representation of each modality is decoupled into two parts, i.e., modality-irrelevant/-exclusive spaces, in a self-regression manner. DMD utilizes a graph distillation unit (GD-Unit) for each decoupled part so that each GD can be performed in a more specialized and effective manner. A GD-Unit consists of a dynamic graph where each vertice represents a modality and each edge indicates a dynamic knowledge distillation. Such GD paradigm provides a flexible knowledge transfer manner where the distillation weights can be automatically learned, thus enabling diverse crossmodal knowledge transfer patterns. Experimental results show DMD consistently obtains superior performance than state-of-the-art MER methods. Visualization results show the graph edges in DMD exhibit meaningful distributional patterns w.r.t. the modality-irrelevant/-exclusive feature spaces. Codes are released at \url{https://github.com/mdswyz/DMD}.) <|cite_end|> <|cite_start|> (Reference: Robust Emotion Recognition in Context Debiasing: Context-aware emotion recognition (CAER) has recently boosted the practical applications of affective computing techniques in unconstrained environments. Mainstream CAER methods invariably extract ensemble representations from diverse contexts and subject-centred characteristics to perceive the target person's emotional state. Despite advancements, the biggest challenge remains due to context bias interference. The harmful bias forces the models to rely on spurious correlations between background contexts and emotion labels in likelihood estimation, causing severe performance bottlenecks and confounding valuable context priors. In this paper, we propose a counterfactual emotion inference (CLEF) framework to address the above issue. Specifically, we first formulate a generalized causal graph to decouple the causal relationships among the variables in CAER. Following the causal graph, CLEF introduces a non-invasive context branch to capture the adverse direct effect caused by the context bias. 
During the inference, we eliminate the direct context effect from the total causal effect by comparing factual and counterfactual outcomes, resulting in bias mitigation and robust prediction. As a model-agnostic framework, CLEF can be readily integrated into existing methods, bringing consistent performance gains.) <|cite_end|> <|cite_start|> (Reference: Context De-confounded Emotion Recognition: Context-Aware Emotion Recognition (CAER) is a crucial and challenging task that aims to perceive the emotional states of the target person with contextual information. Recent approaches invariably focus on designing sophisticated architectures or mechanisms to extract seemingly meaningful representations from subjects and contexts. However, a long-overlooked issue is that a context bias in existing datasets leads to a significantly unbalanced distribution of emotional states among different context scenarios. Concretely, the harmful bias is a confounder that misleads existing models to learn spurious correlations based on conventional likelihood estimation, significantly limiting the models' performance. To tackle the issue, this paper provides a causality-based perspective to disentangle the models from the impact of such bias, and formulate the causalities among variables in the CAER task via a tailored causal graph. Then, we propose a Contextual Causal Intervention Module (CCIM) based on the backdoor adjustment to de-confound the confounder and exploit the true causal effect for model training. CCIM is plug-in and model-agnostic, which improves diverse state-of-the-art approaches by considerable margins. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our CCIM and the significance of causal insight.) <|cite_end|> <|cite_start|> (Reference: Target and source modality co-reinforcement for emotion understanding from asynchronous multimodal sequences: ) <|cite_end|> <|cite_start|> (Reference: Towards Multimodal Human Intention Understanding Debiasing via Subject-Deconfounding: Multimodal intention understanding (MIU) is an indispensable component of human expression analysis (e.g., sentiment or humor) from heterogeneous modalities, including visual postures, linguistic contents, and acoustic behaviors. Existing works invariably focus on designing sophisticated structures or fusion strategies to achieve impressive improvements. Unfortunately, they all suffer from the subject variation problem due to data distribution discrepancies among subjects. Concretely, MIU models are easily misled by distinct subjects with different expression customs and characteristics in the training data to learn subject-specific spurious correlations, significantly limiting performance and generalizability across uninitiated subjects.Motivated by this observation, we introduce a recapitulative causal graph to formulate the MIU procedure and analyze the confounding effect of subjects. Then, we propose SuCI, a simple yet effective causal intervention module to disentangle the impact of subjects acting as unobserved confounders and achieve model training via true causal effects. As a plug-and-play component, SuCI can be widely applied to most methods that seek unbiased predictions. Comprehensive experiments on several MIU benchmarks clearly demonstrate the effectiveness of the proposed module.) 
<|cite_end|> <|cite_start|> (Reference: Towards Multimodal Sentiment Analysis Debiasing via Bias Purification: Multimodal Sentiment Analysis (MSA) aims to understand human intentions by integrating emotion-related clues from diverse modalities, such as visual, language, and audio. Unfortunately, the current MSA task invariably suffers from unplanned dataset biases, particularly multimodal utterance-level label bias and word-level context bias. These harmful biases potentially mislead models to focus on statistical shortcuts and spurious correlations, causing severe performance bottlenecks. To alleviate these issues, we present a Multimodal Counterfactual Inference Sentiment (MCIS) analysis framework based on causality rather than conventional likelihood. Concretely, we first formulate a causal graph to discover harmful biases from already-trained vanilla models. In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases. Then, MCIS can make unbiased decisions from biased observations by comparing factual and counterfactual outcomes. We conduct extensive experiments on several standard MSA benchmarks. Qualitative and quantitative results show the effectiveness of the proposed framework.) <|cite_end|> <|cite_start|> (Reference: Contextual and Cross-modal Interaction for Multi-modal Speech Emotion Recognition: Speech emotion recognition combining linguistic content and audio signals in the dialog is a challenging task. Nevertheless, previous approaches have failed to explore emotion cues in contextual interactions and ignored the long-range dependencies between elements from different modalities. To tackle the above issues, this letter proposes a multimodal speech emotion recognition method using audio and text data. We first present a contextual transformer module to introduce contextual information via embedding the previous utterances between interlocutors, which enhances the emotion representation of the current utterance. Then, the proposed cross-modal transformer module focuses on the interactions between text and audio modalities, adaptively promoting the fusion from one modality to another. Furthermore, we construct associative topological relation over mini-batch and learn the association between deep fused features with graph convolutional network. Experimental results on the IEMOCAP and MELD datasets show that our method outperforms current state-of-the-art methods.) <|cite_end|>focus on designing complex fusion paradigms and interaction mechanisms to enhance the performance of sentiment recognition. For instance, CubeMLP <|cite_start|> (Reference: CubeMLP: An MLP-based Model for Multimodal Sentiment Analysis and Depression Estimation: Multimodal sentiment analysis and depression estimation are two important research topics that aim to predict human mental states using multimodal data. Previous research has focused on developing effective fusion strategies for exchanging and integrating mind-related information from different modalities. Some MLP-based techniques have recently achieved considerable success in a variety of computer vision tasks. Inspired by this, we explore multimodal approaches with a feature-mixing perspective in this study. To this end, we introduce CubeMLP, a multimodal feature processing framework based entirely on MLP. CubeMLP consists of three independent MLP units, each of which has two affine transformations. 
CubeMLP accepts all relevant modality features as input and mixes them across three axes. After extracting the characteristics using CubeMLP, the mixed multimodal features are flattened for task predictions. Our experiments are conducted on sentiment analysis datasets: CMU-MOSI and CMU-MOSEI, and depression estimation dataset: AVEC2019. The results show that CubeMLP can achieve state-of-the-art performance with a much lower computing cost.) <|cite_end|>utilizes three independent multi-layer perceptron units for feature-mixing on three axes.
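Schematically, treating the multimodal input as a tensor $\mathbf{H}\in\mathbb{R}^{L\times M\times d}$ with sequence length $L$, $M$ modalities, and $d$ channels (a simplified abstraction of the original design), the mixing can be written as
\begin{equation*}
\mathbf{H}' \;=\; \mathrm{MLP}_{d}\big(\mathrm{MLP}_{M}\big(\mathrm{MLP}_{L}(\mathbf{H})\big)\big),
\end{equation*}
where each unit applies affine transformations along one axis, and the mixed features are flattened for prediction.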
However, these approaches presuppose complete modalities and therefore struggle to be deployed in real-world applications where modalities may be missing.
Mainstream solutions for the missing modality problem can be summarized in two categories: (i) generative methods <|cite_start|> (Reference: Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data: There are threefold challenges in emotion recognition. First, it is difficult to recognize human's emotional states only considering a single modality. Second, it is expensive to manually annotate the emotional data. Third, emotional data often suffers from missing modalities due to unforeseeable sensor malfunction or configuration issues. In this paper, we address all these problems under a novel multi-view deep generative framework. Specifically, we propose to model the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space. By imposing a Gaussian mixture assumption on the posterior approximation of the shared latent variables, our framework can learn the joint deep representation from multiple modalities and evaluate the importance of each modality simultaneously. To solve the labeled-data-scarcity problem, we extend our multi-view model to semi-supervised learning scenario by casting the semi-supervised classification problem as a specialized missing data imputation task. To address the missing-modality problem, we further extend our semi-supervised multi-view model to deal with incomplete data, where a missing view is treated as a latent variable and integrated out during inference. This way, the proposed overall framework can utilize all available (both labeled and unlabeled, as well as both complete and incomplete) data to improve its generalization ability. The experiments conducted on two real multi-modal emotion datasets demonstrated the superiority of our framework.) <|cite_end|> <|cite_start|> (Reference: Multimodal Reconstruct and Align Net for Missing Modality Problem in Sentiment Analysis: ) <|cite_end|> <|cite_start|> (Reference: GCNet: Graph Completion Network for Incomplete Multimodal Learning in Conversation: Conversations have become a critical data format on social media platforms. Understanding conversation from emotion, content and other aspects also attracts increasing attention from researchers due to its widespread application in human-computer interaction. In real-world environments, we often encounter the problem of incomplete modalities, which has become a core issue of conversation understanding. To address this problem, researchers propose various methods. However, existing approaches are mainly designed for individual utterances rather than conversational data, which cannot fully exploit temporal and speaker information in conversations. To this end, we propose a novel framework for incomplete multimodal learning in conversations, called "Graph Complete Network (GCNet)", filling the gap of existing works. Our GCNet contains two well-designed graph neural network-based modules, "Speaker GNN" and "Temporal GNN", to capture temporal and speaker dependencies. To make full use of complete and incomplete data, we jointly optimize classification and reconstruction tasks in an end-to-end manner. To verify the effectiveness of our method, we conduct experiments on three benchmark conversational datasets. Experimental results demonstrate that our GCNet is superior to existing state-of-the-art approaches in incomplete multimodal learning. Code is available at https://github.com/zeroQiaoba/GCNet.) 
<|cite_end|> <|cite_start|> (Reference: Distribution-Consistent Modal Recovering for Incomplete Multimodal Learning: Recovering missing modality is popular in incomplete multimodal learning because it usually benefits downstream tasks. However, the existing methods often directly estimate missing modalities from the observed ones by deep neural networks, lacking consideration of the distribution gap between modalities, resulting in the inconsistency of distributions between the recovered and the true data. To mitigate this issue, in this work, we propose a novel recovery paradigm, Distribution-Consistent Modal Recovering (DiCMoR), to transfer the distributions from available modalities to missing modalities, which thus maintains the distribution consistency of recovered data. In particular, we design a class-specific flow based modality recovery method to transform cross-modal distributions on the condition of sample class, which could well predict a distribution-consistent space for missing modality by virtue of the invertibility and exact density estimation of normalizing flow. The generated data from the predicted distribution is integrated with available modalities for the task of classification. Experiments show that DiCMoR gains superior performances and is more robust than existing state-of-the-art methods under various missing patterns. Visualization results show that the distribution gaps between recovered modalities and missing modalities are mitigated. Codes are released at https://github.com/mdswyz/DiCMoR.) <|cite_end|>and (ii) joint learning methods <|cite_start|> (Reference: Found in Translation: Learning Robust Joint Representations by Cyclic Translations Between Modalities: Multimodal sentiment analysis is a core research area that studies speaker sentiment expressed from the language, visual, and acoustic modalities. The central challenge in multimodal learning involves inferring joint representations that can process and relate information from these modalities. However, existing work learns joint representations by requiring all modalities as input and as a result, the learned representations may be sensitive to noisy or missing modalities at test time. With the recent success of sequence to sequence (Seq2Seq) models in machine translation, there is an opportunity to explore new ways of learning joint representations that may not require all input modalities at test time. In this paper, we propose a method to learn robust joint representations by translating between modalities. Our method is based on the key insight that translation from a source to a target modality provides a method of learning joint representations using only the source modality as input. We augment modality translations with a cycle consistency loss to ensure that our joint representations retain maximal information from all modalities. Once our translation model is trained with paired multimodal data, we only need data from the source modality at test time for final sentiment prediction. This ensures that our model remains robust from perturbations or missing information in the other modalities. We train our model with a coupled translation-prediction objective and it achieves new state-of-the-art results on multimodal sentiment analysis datasets: CMU-MOSI, ICT-MMMO, and YouTube. Additional experiments show that our model learns increasingly discriminative joint representations with more input modalities while maintaining robustness to missing or perturbed modalities.) 
<|cite_end|> <|cite_start|> (Reference: TransModality: An End2End Fusion Method with Transformer for Multimodal Sentiment Analysis: Multimodal sentiment analysis is an important research area that predicts speaker's sentiment tendency through features extracted from textual, visual and acoustic modalities. The central challenge is the fusion method of the multimodal information. A variety of fusion methods have been proposed, but few of them adopt end-to-end translation models to mine the subtle correlation between modalities. Enlightened by recent success of Transformer in the area of machine translation, we propose a new fusion method, TransModality, to address the task of multimodal sentiment analysis. We assume that translation between modalities contributes to a better joint representation of speaker's utterance. With Transformer, the learned features embody the information both from the source modality and the target modality. We validate our model on multiple multimodal datasets: CMU-MOSI, MELD, IEMOCAP. The experiments show that our proposed method achieves the state-of-the-art performance.) <|cite_end|> <|cite_start|> (Reference: Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities: Multimodal sentiment analysis has been studied under the assumption that all modalities are available. However, such a strong assumption does not always hold in practice, and most of multimodal fusion models may fail when partial modalities are missing. Several works have addressed the missing modality problem; but most of them only considered the single modality missing case, and ignored the practically more general cases of multiple modalities missing. To this end, in this paper, we propose a Tag-Assisted Transformer Encoder (TATE) network to handle the problem of missing uncertain modalities. Specifically, we design a tag encoding module to cover both the single modality and multiple modalities missing cases, so as to guide the network's attention to those missing modalities. Besides, we adopt a new space projection pattern to align common vectors. Then, a Transformer encoder-decoder network is utilized to learn the missing modality features. At last, the outputs of the Transformer encoder are used for the final sentiment classification. Extensive experiments are conducted on CMU-MOSI and IEMOCAP datasets, showing that our method can achieve significant improvements compared with several baselines.) <|cite_end|> <|cite_start|> (Reference: Modality translation-based multimodal sentiment analysis under uncertain missing modalities: ) <|cite_end|>.
Reconstruction methods generate missing features and semantics in modalities based on available modalities. For example, TFR-Net <|cite_start|> (Reference: Transformer-based feature reconstruction network for robust multimodal sentiment analysis: Improving robustness against data missing has become one of the core challenges in Multimodal Sentiment Analysis (MSA), which aims to judge speaker sentiments from the language, visual, and acoustic signals. In the current research, translation-based methods and tensor regularization methods are proposed for MSA with incomplete modality features. However, both of them fail to cope with random modality feature missing in non-aligned sequences. In this paper, a transformer-based feature reconstruction network (TFR-Net) is proposed to improve the robustness of models for the random missing in non-aligned modality sequences. First, intra-modal and inter-modal attention-based extractors are adopted to learn robust representations for each element in modality sequences. Then, a reconstruction module is proposed to generate the missing modality features. With the supervision of SmoothL1Loss between generated and complete sequences, TFR-Net is expected to learn semantic-level features corresponding to missing features. Extensive experiments on two public benchmark datasets show that our model achieves good results against data missing across various missing modality combinations and various missing degrees.) <|cite_end|>leverages the feature reconstruction module to guide the extractor to reconstruct missing semantics. MVAE <|cite_start|> (Reference: Semi-supervised Deep Generative Modelling of Incomplete Multi-Modality Emotional Data: There are threefold challenges in emotion recognition. First, it is difficult to recognize human's emotional states only considering a single modality. Second, it is expensive to manually annotate the emotional data. Third, emotional data often suffers from missing modalities due to unforeseeable sensor malfunction or configuration issues. In this paper, we address all these problems under a novel multi-view deep generative framework. Specifically, we propose to model the statistical relationships of multi-modality emotional data using multiple modality-specific generative networks with a shared latent space. By imposing a Gaussian mixture assumption on the posterior approximation of the shared latent variables, our framework can learn the joint deep representation from multiple modalities and evaluate the importance of each modality simultaneously. To solve the labeled-data-scarcity problem, we extend our multi-view model to semi-supervised learning scenario by casting the semi-supervised classification problem as a specialized missing data imputation task. To address the missing-modality problem, we further extend our semi-supervised multi-view model to deal with incomplete data, where a missing view is treated as a latent variable and integrated out during inference. This way, the proposed overall framework can utilize all available (both labeled and unlabeled, as well as both complete and incomplete) data to improve its generalization ability. The experiments conducted on two real multi-modal emotion datasets demonstrated the superiority of our framework.) <|cite_end|>solves the modality missing problem by the semi-supervised multi-view deep generative framework. Joint learning efforts refer to learning joint multimodal representations utilizing correlations among modalities. 
For instance, MMIN <|cite_start|> (Reference: Missing modality imagination network for emotion recognition with uncertain missing modalities: Multimodal fusion has been proved to improve emotion recognition performance in previous works. However, in real-world applications, we often encounter the problem of missing modality, and which modalities will be missing is uncertain. It makes the fixed multimodal fusion fail in such cases. In this work, we propose a unified model, Missing Modality Imagination Network (MMIN), to deal with the uncertain missing modality problem. MMIN learns robust joint multimodal representations, which can predict the representation of any missing modality given available modalities under different missing modality conditions.Comprehensive experiments on two benchmark datasets demonstrate that the unified MMIN model significantly improves emotion recognition performance under both uncertain missing-modality testing conditions and full-modality ideal testing condition. The code will be available at https://github.com/AIM3-RUC/MMIN.) <|cite_end|>generates robust joint multimodal representations via cross-modality imagination. TATE <|cite_start|> (Reference: Tag-assisted Multimodal Sentiment Analysis under Uncertain Missing Modalities: Multimodal sentiment analysis has been studied under the assumption that all modalities are available. However, such a strong assumption does not always hold in practice, and most of multimodal fusion models may fail when partial modalities are missing. Several works have addressed the missing modality problem; but most of them only considered the single modality missing case, and ignored the practically more general cases of multiple modalities missing. To this end, in this paper, we propose a Tag-Assisted Transformer Encoder (TATE) network to handle the problem of missing uncertain modalities. Specifically, we design a tag encoding module to cover both the single modality and multiple modalities missing cases, so as to guide the network's attention to those missing modalities. Besides, we adopt a new space projection pattern to align common vectors. Then, a Transformer encoder-decoder network is utilized to learn the missing modality features. At last, the outputs of the Transformer encoder are used for the final sentiment classification. Extensive experiments are conducted on CMU-MOSI and IEMOCAP datasets, showing that our method can achieve significant improvements compared with several baselines.) <|cite_end|>presents a tag encoding module to guide the network to focus on missing modalities.
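A schematic objective common to this line of work is to predict the representation of a missing modality from the available ones (details vary across methods; notation ours):
\begin{equation*}
\mathcal{L}_{\mathrm{imag}} \;=\; \big\|\, \phi\big(\{h_{a}\}_{a\in\mathcal{A}}\big) - h_{m} \,\big\|_{2}^{2} \;+\; \lambda\,\mathcal{L}_{\mathrm{task}},
\end{equation*}
where $\mathcal{A}$ is the set of available modalities, $h_{m}$ is the representation of the missing modality $m$ observed only during training, and $\phi$ is an imagination (prediction) module.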
However, the aforementioned approaches fail to account for correlations among samples and categories, resulting in inadequate compensation for the missing modality semantics.
In contrast, we design effective learning paradigms to adequately capture potential inter-sample and inter-category correlations.
\subsection{Knowledge Distillation}
Knowledge distillation utilizes additional supervisory information from the pre-trained teacher's network to assist in the training of the student's network <|cite_start|> (Reference: Distilling the Knowledge in a Neural Network: A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.) <|cite_end|>.
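In its classical response-based form, the student is optimized to match the teacher's temperature-softened predictions in addition to the ground-truth labels:
\begin{equation*}
\mathcal{L}_{\mathrm{KD}} \;=\; (1-\alpha)\,\mathcal{L}_{\mathrm{CE}}\big(y, \sigma(z_{s})\big) \;+\; \alpha\, T^{2}\,\mathrm{KL}\!\big(\sigma(z_{t}/T)\,\big\|\,\sigma(z_{s}/T)\big),
\end{equation*}
where $z_{t}$ and $z_{s}$ are the teacher and student logits, $\sigma$ is the softmax, $T$ is a temperature, and $\alpha$ is a balancing weight.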
Knowledge distillation methods can be roughly categorized into two types, distillation from intermediate features <|cite_start|> (Reference: Paraphrasing Complex Network: Network Compression via Factor Transfer: Many researchers have sought ways of model compression to reduce the size of a deep neural network (DNN) with minimal performance degradation in order to use DNNs in embedded systems. Among the model compression methods, a method called knowledge transfer is to train a student network with a stronger teacher network. In this paper, we propose a novel knowledge transfer method which uses convolutional operations to paraphrase teacher's knowledge and to translate it for the student. This is done by two convolutional modules, which are called a paraphraser and a translator. The paraphraser is trained in an unsupervised manner to extract the teacher factors which are defined as paraphrased information of the teacher network. The translator located at the student network extracts the student factors and helps to translate the teacher factors by mimicking them. We observed that our student network trained with the proposed factor transfer method outperforms the ones trained with conventional knowledge transfer methods.) <|cite_end|> <|cite_start|> (Reference: Relational Knowledge Distillation: Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of individual data examples represented by the teacher. We introduce a novel approach, dubbed relational knowledge distillation (RKD), that transfers mutual relations of data examples instead. For concrete realizations of RKD, we propose distance-wise and angle-wise distillation losses that penalize structural differences in relations. Experiments conducted on different tasks show that the proposed method improves educated student models with a significant margin. In particular for metric learning, it allows students to outperform their teachers' performance, achieving the state of the arts on standard benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Contrastive Representation Distillation: Often we wish to transfer representational knowledge from one neural network to another. Examples include distilling a large network into a smaller one, transferring knowledge from one sensory modality to a second, or ensembling a collection of models into a single estimator. Knowledge distillation, the standard approach to these problems, minimizes the KL divergence between the probabilistic outputs of a teacher and student network. We demonstrate that this objective ignores important structural knowledge of the teacher network. This motivates an alternative objective by which we train a student to capture significantly more information in the teacher's representation of the data. We formulate this objective as contrastive learning. Experiments demonstrate that our resulting new objective outperforms knowledge distillation and other cutting-edge distillers on a variety of knowledge transfer tasks, including single model compression, ensemble distillation, and cross-modal transfer. Our method sets a new state-of-the-art in many transfer tasks, and sometimes even outperforms the teacher network when combined with knowledge distillation. Code: http://github.com/HobbitLong/RepDistiller.) 
<|cite_end|> <|cite_start|> (Reference: A gift from knowledge distillation: fast optimization, network minimization and transfer learning: We introduce a novel technique for knowledge transfer, where knowledge from a pretrained deep neural network (DNN) is distilled and transferred to another DNN. As the DNN performs a mapping from the input space to the output space through many layers sequentially, we define the distilled knowledge to be transferred in terms of flow between layers, which is calculated by computing the inner product between features from two layers. When we compare the student DNN and the original network with the same size as the student DNN but trained without a teacher network, the proposed method of transferring the distilled knowledge as the flow between two layers exhibits three important phenomena: (1) the student DNN that learns the distilled knowledge is optimized much faster than the original model, (2) the student DNN outperforms the original DNN, and (3) the student DNN can learn the distilled knowledge from a teacher DNN that is trained at a different task, and the student DNN outperforms the original DNN that is trained from scratch.) <|cite_end|>and responses <|cite_start|> (Reference: On the Efficacy of Knowledge Distillation: In this paper, we present a thorough evaluation of the efficacy of knowledge distillation and its dependence on student and teacher architectures. Starting with the observation that more accurate teachers often don't make good teachers, we attempt to tease apart the factors that affect knowledge distillation performance. We find crucially that larger models do not often make better teachers. We show that this is a consequence of mismatched capacity, and that small students are unable to mimic large teachers. We find typical ways of circumventing this (such as performing a sequence of knowledge distillation steps) to be ineffective. Finally, we show that this effect can be mitigated by stopping the teacher's training early. Our results generalize across datasets and models.) <|cite_end|> <|cite_start|> (Reference: Born Again Neural Networks: Knowledge Distillation (KD) consists of transferring âknowledgeâ from one machine learning model (the teacher) to another (the student). Commonly, the teacher is a high-capacity model with formidable performance, while the student is more compact. By transferring knowledge, one hopes to benefit from the studentâs compactness, without sacrificing too much performance. We study KD from a new perspective: rather than compressing models, we train students parameterized identically to their teachers. Surprisingly, these Born-Again Networks (BANs), outperform their teachers significantly, both on computer vision and language modeling tasks. Our experiments with BANs based on DenseNets demonstrate state-of-the-art performance on the CIFAR-10 (3.5%) and CIFAR-100 (15.5%) datasets, by validation error. Additional experiments explore two distillation objectives: (i) Confidence-Weighted by Teacher Max (CWTM) and (ii) Dark Knowledge with Permuted Predictions (DKPP). Both methods elucidate the essential components of KD, demonstrating the effect of the teacher outputs on both predicted and non-predicted classes.) <|cite_end|> <|cite_start|> (Reference: Improved Knowledge Distillation via Teacher Assistant: Despite the fact that deep neural networks are powerful models and achieve appealing results on many tasks, they are too large to be deployed on edge devices like smartphones or embedded sensor nodes. 
There have been efforts to compress these networks, and a popular method is knowledge distillation, where a large (teacher) pre-trained network is used to train a smaller (student) network. However, in this paper, we show that the student network performance degrades when the gap between student and teacher is large. Given a fixed student network, one cannot employ an arbitrarily large teacher, or in other words, a teacher can effectively transfer its knowledge to students up to a certain size, not smaller. To alleviate this shortcoming, we introduce multi-step knowledge distillation, which employs an intermediate-sized network (teacher assistant) to bridge the gap between the student and the teacher. Moreover, we study the effect of teacher assistant size and extend the framework to multi-step distillation. Theoretical analysis and extensive experiments on CIFAR-10,100 and ImageNet datasets and on CNN and ResNet architectures substantiate the effectiveness of our proposed approach.) <|cite_end|> <|cite_start|> (Reference: Snapshot Distillation: Teacher-Student Optimization in One Generation: Optimizing a deep neural network is a fundamental task in computer vision, yet direct training methods often suffer from over-fitting. Teacher-student optimization aims at providing complementary cues from a model trained previously, but these approaches are often considerably slow due to the pipeline of training a few generations in sequence, i.e., time complexity is increased by several times. This paper presents snapshot distillation (SD), the first framework which enables teacher-student optimization in one generation. The idea of SD is very simple: instead of borrowing supervision signals from previous generations, we extract such information from earlier epochs in the same generation, meanwhile make sure that the difference between teacher and student is sufficiently large so as to prevent under-fitting. To achieve this goal, we implement SD in a cyclic learning rate policy, in which the last snapshot of each cycle is used as the teacher for all iterations in the next cycle, and the teacher signal is smoothed to provide richer information. In standard image classification benchmarks such as CIFAR100 and ILSVRC2012, SD achieves consistent accuracy gain without heavy computational overheads. We also verify that models pre-trained with SD transfers well to object detection and semantic segmentation in the PascalVOC dataset.) <|cite_end|> | [
"<|reference_start|> Misa: Modality-invariant and-specific representations for multimodal sentiment analysis: Multimodal Sentiment Analysis is an active area of research that leverages multimodal signals for affective understanding of user-generated videos. The predominant approach, addressing this task, has been to develop sophisticated fusion techniques. However, the heterogeneous nature of the signals creates distributional modality gaps that pose significant challenges. In this paper, we aim to learn effective modality representations to aid the process of fusion. We propose a novel framework, MISA, which projects each modality to two distinct subspaces. The first subspace is modality-invariant, where the representations across modalities learn their commonalities and reduce the modality gap. The second subspace is modality-specific, which is private to each modality and captures their characteristic features. These representations provide a holistic view of the multimodal data, which is used for fusion that leads to task predictions. Our experiments on popular sentiment analysis benchmarks, MOSI and MOSEI, demonstrate significant gains over state-of-the-art models. We also consider the task of Multimodal Humor Detection and experiment on the recently proposed UR_FUNNY dataset. Here too, our model fares better than strong baselines, establishing MISA as a useful multimodal framework. <|reference_end|>",
"<|reference_start|> Modality translation-based multimodal sentiment analysis under uncertain missing modalities: <|reference_end|>",
"<|reference_start|> A Unified Self-Distillation Framework for Multimodal Sentiment Analysis with Uncertain Missing Modalities: Multimodal Sentiment Analysis (MSA) has attracted widespread research attention recently. Most MSA studies are based on the assumption of modality completeness. However, many inevitable factors in real-world scenarios lead to uncertain missing modalities, which invalidate the fixed multimodal fusion approaches. To this end, we propose a Unified multimodal Missing modality self-Distillation Framework (UMDF) to handle the problem of uncertain missing modalities in MSA. Specifically, a unified self-distillation mechanism in UMDF drives a single network to automatically learn robust inherent representations from the consistent distribution of multimodal data. Moreover, we present a multi-grained crossmodal interaction module to deeply mine the complementary semantics among modalities through coarse- and fine-grained crossmodal attention. Eventually, a dynamic feature integration module is introduced to enhance the beneficial semantics in incomplete modalities while filtering the redundant information therein to obtain a refined and robust multimodal representation. Comprehensive experiments on three datasets demonstrate that our framework significantly improves MSA performance under both uncertain missing-modality and complete-modality testing conditions. <|reference_end|>",
"<|reference_start|> Context De-confounded Emotion Recognition: Context-Aware Emotion Recognition (CAER) is a crucial and challenging task that aims to perceive the emotional states of the target person with contextual information. Recent approaches invariably focus on designing sophisticated architectures or mechanisms to extract seemingly meaningful representations from subjects and contexts. However, a long-overlooked issue is that a context bias in existing datasets leads to a significantly unbalanced distribution of emotional states among different context scenarios. Concretely, the harmful bias is a confounder that misleads existing models to learn spurious correlations based on conventional likelihood estimation, significantly limiting the models' performance. To tackle the issue, this paper provides a causality-based perspective to disentangle the models from the impact of such bias, and formulate the causalities among variables in the CAER task via a tailored causal graph. Then, we propose a Contextual Causal Intervention Module (CCIM) based on the backdoor adjustment to de-confound the confounder and exploit the true causal effect for model training. CCIM is plug-in and model-agnostic, which improves diverse state-of-the-art approaches by considerable margins. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our CCIM and the significance of causal insight. <|reference_end|>"
] | [
11,
29,
31,
38
] | {"<|cite_1|>": "ss-1297477", "<|cite_2|>": "ss-855006", "<|multi_cite_3_1|>": "arxiv-337445", "<|multi_cite_3_2|>": "ss-1232722", "<|multi_cite_4_1|>": "arxiv-526314", "<|multi_cite_4_2|>": "arxiv-526312", "<|multi_cite_4_3|>": "ss-752494", "<|multi_cite_4_4|>": "ss-1600009", "<|multi_cite_4_5|>": "arxiv-391994", "<|multi_cite_4_6|>": "arxiv-574431", "<|multi_cite_4_7|>": "arxiv-482663", "<|multi_cite_5_1|>": "ss-950046", "<|multi_cite_5_2|>": "arxiv-320050", "<|multi_cite_5_3|>": "ss-2097032", "<|multi_cite_5_4|>": "ss-843493", "<|multi_cite_5_5|>": "ss-1549173", "<|multi_cite_5_6|>": "arxiv-491637", "<|multi_cite_5_7|>": "arxiv-593781", "<|multi_cite_5_8|>": "arxiv-490726", "<|multi_cite_5_9|>": "ss-2281788", "<|multi_cite_5_10|>": "arxiv-593271", "<|multi_cite_5_11|>": "arxiv-593269", "<|multi_cite_5_12|>": "ss-1600567", "<|multi_cite_5_13|>": "arxiv-526009", "<|multi_cite_6_1|>": "arxiv-403306", "<|multi_cite_6_2|>": "ss-739946", "<|multi_cite_6_3|>": "arxiv-185057", "<|multi_cite_6_4|>": "arxiv-288526", "<|multi_cite_6_5|>": "arxiv-416192", "<|multi_cite_6_6|>": "ss-2114931", "<|multi_cite_6_7|>": "ss-2087797", "<|multi_cite_6_8|>": "ss-2087798", "<|cite_7|>": "arxiv-185057", "<|multi_cite_8_1|>": "ss-950046", "<|multi_cite_8_2|>": "arxiv-364187", "<|multi_cite_8_3|>": "arxiv-436944", "<|multi_cite_8_4|>": "arxiv-491637", "<|multi_cite_8_5|>": "arxiv-593781", "<|multi_cite_8_6|>": "arxiv-490726", "<|multi_cite_8_7|>": "ss-2281788", "<|multi_cite_8_8|>": "arxiv-593271", "<|multi_cite_8_9|>": "arxiv-593269", "<|multi_cite_8_10|>": "ss-1600567", "<|cite_9|>": "arxiv-436944", "<|multi_cite_10_1|>": "arxiv-168451", "<|multi_cite_10_2|>": "ss-2224984", "<|multi_cite_10_3|>": "arxiv-403306", "<|multi_cite_10_4|>": "ss-739946", "<|multi_cite_11_1|>": "arxiv-185057", "<|multi_cite_11_2|>": "arxiv-288526", "<|multi_cite_11_3|>": "arxiv-416192", "<|multi_cite_11_4|>": "ss-2114931", "<|cite_12|>": "ss-1232723", "<|cite_13|>": "arxiv-168451", "<|cite_14|>": "ss-1838626", "<|cite_15|>": "arxiv-416192", "<|cite_16|>": "arxiv-74282", "<|multi_cite_17_1|>": "arxiv-148337", "<|multi_cite_17_2|>": "arxiv-199249", "<|multi_cite_17_3|>": "arxiv-230414", "<|multi_cite_17_4|>": "ss-1422884", "<|multi_cite_18_1|>": "arxiv-226947", "<|multi_cite_18_2|>": "arxiv-158230", "<|multi_cite_18_3|>": "arxiv-190894", "<|multi_cite_18_4|>": "arxiv-182712", "<|multi_cite_18_5|>": "arxiv-111759", "<|multi_cite_19_1|>": "arxiv-349184", "<|multi_cite_19_2|>": "ss-1873018", "<|multi_cite_19_3|>": "arxiv-220736", "<|multi_cite_19_4|>": "arxiv-544649", "<|multi_cite_19_5|>": "arxiv-497397", "<|cite_20|>": "arxiv-349184"} |
2409.13464-1 | <|cite_start|> (Reference: Reverse Attention-Based Residual Network for Salient Object Detection: Benefiting from the quick development of deep convolutional neural networks, especially fully convolutional neural networks (FCNs), remarkable progresses have been achieved on salient object detection recently. Nevertheless, these FCNs based methods are still challenging to generate high resolution saliency maps, and also not applicable for subsequent applications due to their heavy model weights. In this paper, we propose a compact and efficient deep network with high accuracy for salient object detection. Firstly, we propose two strategies for initial prediction, one is a new designed multi-scale context module, the other is incorporating hand-crafted saliency priors. Secondly, we employ residual learning to refine it progressively by only learning the residual in each side-output, which can be achieved with few convolutional parameters, therefore leads to high compactness and high efficiency. Finally, we further design a novel top-down reverse attention block to guide the above side-output residual learning. Specifically, the current predicted salient regions are used to erase its side-output feature, thus the missing object parts and details can be efficiently learned from these unerased regions, which results in more complete detection and high accuracy. Extensive experimental results on seven benchmark datasets demonstrate that the proposed network performs favorably against the state-of-the-art approaches, and shows advantages in simplicity, compactness and efficiency.) <|cite_end|>designed a reverse attention block to perform an attention fusion operation on different side-output layers.
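A reverse attention operation of this flavor can be sketched as follows; this is a simplified illustration of the general mechanism (the tensor shapes are arbitrary), not the exact block of the cited work.
\begin{verbatim}
import torch
import torch.nn.functional as F

def reverse_attention(side_feature, coarse_pred):
    # side_feature: (N, C, H, W) side-output feature map.
    # coarse_pred:  (N, 1, h, w) saliency logits from a deeper stage.
    if coarse_pred.shape[-2:] != side_feature.shape[-2:]:
        coarse_pred = F.interpolate(coarse_pred, size=side_feature.shape[-2:],
                                    mode='bilinear', align_corners=False)
    # Emphasize regions NOT yet predicted as salient.
    reverse_map = 1.0 - torch.sigmoid(coarse_pred)
    return side_feature * reverse_map  # broadcast over channels

feat = torch.randn(2, 64, 44, 44)
pred = torch.randn(2, 1, 22, 22)
refined = reverse_attention(feat, pred)
\end{verbatim}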
Liu et al. <|cite_start|> (Reference: Visual Saliency Transformer: Existing state-of-the-art saliency detection methods heavily rely on CNN-based architectures. Alternatively, we rethink this task from a convolution-free sequence-to-sequence perspective and predict saliency by modeling long-range dependencies, which can not be achieved by convolution. Specifically, we develop a novel unified model based on a pure transformer, namely, Visual Saliency Transformer (VST), for both RGB and RGB-D salient object detection (SOD). It takes image patches as inputs and leverages the transformer to propagate global contexts among image patches. Unlike conventional architectures used in Vision Transformer (ViT), we leverage multi-level token fusion and propose a new token upsampling method under the transformer framework to get high-resolution detection results. We also develop a token-based multi-task decoder to simultaneously perform saliency and boundary detection by introducing task-related tokens and a novel patch-task-attention mechanism. Experimental results show that our model outperforms existing methods on both RGB and RGB-D SOD benchmark datasets. Most importantly, our whole framework not only provides a new perspective for the SOD field but also shows a new paradigm for transformer-based dense prediction models. Code is available at https://github.com/nnizhang/VST.) <|cite_end|>employed the multi-head self-attention and patch-task-attention mechanism to perform saliency detection. <|cite_start|> (Reference: TRACER: Extreme Attention Guided Salient Object Tracing Network: Existing studies on salient object detection (SOD) focus on extracting distinct objects with edge features and aggregating multi-level features to improve SOD performance. However, both performance gain and computational efficiency cannot be achieved, which has motivated us to study the inefficiencies in existing encoder-decoder structures to avoid this trade-off. We propose TRACER which excludes multi-decoder structures and minimizes the learning parameters usage by employing attention guided tracing modules (ATMs), as shown in Fig. 1.) <|cite_end|>employed a masked edge attention module and a union attention module in the encoder and decoder, respectively. The former was used to propagate the refined edge information and the latter was applied to aggregate complementary channel and important spatial information. <|cite_start|> (Reference: Pyramidal Attention for Saliency Detection: Salient object detection (SOD) extracts meaningful contents from an input image. RGB-based SOD methods lack the complementary depth clues; hence, providing limited performance for complex scenarios. Similarly, RGB-D models process RGB and depth inputs, but the depth data availability during testing may hinder the model's practical applicability. This paper exploits only RGB images, estimates depth from RGB, and leverages the intermediate depth features. We employ a pyramidal attention structure to extract multi-level convolutional-transformer features to process initial stage representations and further enhance the subsequent ones. At each stage, the backbone transformer model produces global receptive fields and computing in parallel to attain fine-grained global predictions refined by our residual convolutional attention decoder for optimal saliency prediction. We report significantly improved performance against 21 and 40 state-of-the-art SOD methods on eight RGB and RGB-D datasets, respectively. 
Consequently, we present a new SOD perspective of generating RGB-D SOD without acquiring depth data during training and testing and assist RGB methods with depth clues for improved performance. The code and trained models are available at https://github.com/tanveer-hussain/EfficientSOD2) <|cite_end|>applied a residual convolutional attention decoder in a pyramidal attention manner to generate fine-grained saliency predictions.
In <|cite_start|> (Reference: Pyramid Grafting Network for One-Stage High Resolution Saliency Detection: Recent salient object detection (SOD) methods based on deep neural network have achieved remarkable performance. However, most of existing SOD models designed for low-resolution input perform poorly on high-resolution images due to the contradiction between the sampling depth and the receptive field size. Aiming at resolving this contradiction, we propose a novel one-stage framework called Pyramid Grafting Network (PGNet), using transformer and CNN backbone to extract features from different resolution images independently and then graft the features from transformer branch to CNN branch. An attention-based Cross-Model Grafting Module (CMGM) is proposed to enable CNN branch to combine broken detailed information more holistically, guided by different source feature during decoding process. Moreover, we design an Attention Guided Loss (AGL) to explicitly supervise the attention matrix generated by CMGM to help the network better interact with the attention from different models. We contribute a new Ultra-High-Resolution Saliency Detection dataset UHRSD, containing 5,920 images at 4K-8K resolutions. To our knowledge, it is the largest dataset in both quantity and resolution for high-resolution SOD task, which can be used for training and testing in future research. Sufficient experiments on UHRSD and widely-used SOD datasets demonstrate that our method achieves superior performance compared to the state-of-the-art methods.) <|cite_end|>, an attention-based cross-model grafting module and an attention-guided Loss are proposed to promote feature learning.
Ma et al. <|cite_start|> (Reference: Boosting broader receptive fields for salient object detection: Salient Object Detection has boomed in recent years and achieved impressive performance on regular-scale targets. However, existing methods encounter performance bottlenecks in processing objects with scale variation, especially extremely large- or small-scale objects with asymmetric segmentation requirements, since they are inefficient in obtaining more comprehensive receptive fields. With this issue in mind, this paper proposes a framework named BBRF for Boosting Broader Receptive Fields, which includes a Bilateral Extreme Stripping (BES) encoder, a Dynamic Complementary Attention Module (DCAM) and a Switch-Path Decoder (SPD) with a new boosting loss under the guidance of Loop Compensation Strategy (LCS). Specifically, we rethink the characteristics of the bilateral networks, and construct a BES encoder that separates semantics and details in an extreme way so as to get the broader receptive fields and obtain the ability to perceive extreme large- or small-scale objects. Then, the bilateral features generated by the proposed BES encoder can be dynamically filtered by the newly proposed DCAM. This module interactively provides spacial-wise and channel-wise dynamic attention weights for the semantic and detail branches of our BES encoder. Furthermore, we subsequently propose a Loop Compensation Strategy to boost the scale-specific features of multiple decision paths in SPD. These decision paths form a feature loop chain, which creates mutually compensating features under the supervision of boosting loss. Experiments on five benchmark datasets demonstrate that the proposed BBRF has a great advantage to cope with scale variation and can reduce the Mean Absolute Error over 20% compared with the state-of-the-art methods.) <|cite_end|>proposed a complementary attention module to dynamically provide spatial-wise and channel-wise attention for detail and semantic features from the encoder.
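The general idea of jointly applying channel-wise and spatial-wise re-weighting can be sketched with a generic squeeze-and-excitation-style module; the channel count, reduction ratio, and kernel size below are illustrative assumptions and do not reproduce the DCAM of the cited paper.
\begin{verbatim}
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """Generic channel-wise + spatial-wise attention (illustrative only)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid())

    def forward(self, x):
        x = x * self.channel_gate(x)   # re-weight channels
        x = x * self.spatial_gate(x)   # re-weight spatial locations
        return x

attention = ChannelSpatialAttention(64)
out = attention(torch.randn(2, 64, 32, 32))
\end{verbatim}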
Recently, Zhou et al. <|cite_start|> (Reference: Multi-Type Self-Attention Guided Degraded Saliency Detection: Existing saliency detection techniques are sensitive to image quality and perform poorly on degraded images. In this paper, we systematically analyze the current status of the research on detecting salient objects from degraded images and then propose a new multi-type self-attention network, namely MSANet, for degraded saliency detection. The main contributions include: 1) Applying attention transfer learning to promote semantic detail perception and internal feature mining of the target network on degraded images; 2) Developing a multi-type self-attention mechanism to achieve the weight recalculation of multi-scale features. By computing global and local attention scores, we obtain the weighted features of different scales, effectively suppress the interference of noise and redundant information, and achieve a more complete boundary extraction. The proposed MSANet converts low-quality inputs to high-quality saliency maps directly in an end-to-end fashion. Experiments on seven widely-used datasets show that our approach produces good performance on both clear and degraded images.) <|cite_end|>proposed an attention transfer network for degraded SOD. However, it only performs pixel-wise selection of features and ignores the structural discontinuities and blurring characteristics of compressed images, which limits its performance in CI SOD.
Although the aforementioned approaches can boost SOD performance on clean images, they ignore the challenges posed by compressed images and are therefore vulnerable in CI SOD tasks, showing reduced robustness on compressed images, as shown in Fig. \ref{fig:benchmark_results}.
Unlike these methods, our work focuses on the under-explored CI SOD problem and proposes a hybrid prior learning strategy (HPL). HPL considers the spatial correlation and crucial location information of compressed images, enabling the network to learn more robust representations and achieve better CI SOD.
\subsection{Knowledge Distillation}
Knowledge distillation (KD), also known as teacher-student learning, is a commonly used method for transferring information from a large network to a smaller one, with the aim of improving the efficiency of deep neural models.
Hinton et al. <|cite_start|> (Reference: Distilling the Knowledge in a Neural Network: A very simple way to improve the performance of almost any machine learning algorithm is to train many different models on the same data and then to average their predictions. Unfortunately, making predictions using a whole ensemble of models is cumbersome and may be too computationally expensive to allow deployment to a large number of users, especially if the individual models are large neural nets. Caruana and his collaborators have shown that it is possible to compress the knowledge in an ensemble into a single model which is much easier to deploy and we develop this approach further using a different compression technique. We achieve some surprising results on MNIST and we show that we can significantly improve the acoustic model of a heavily used commercial system by distilling the knowledge in an ensemble of models into a single model. We also introduce a new type of ensemble composed of one or more full models and many specialist models which learn to distinguish fine-grained classes that the full models confuse. Unlike a mixture of experts, these specialist models can be trained rapidly and in parallel.) <|cite_end|>first proposed the concept of KD, in which the output logits of the teacher network are transmitted to the student network to boost the image classification performance of the student network.
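A minimal sketch of this logit-based distillation objective is given below: the student matches the teacher's temperature-softened class distribution with a KL term, combined with the usual cross-entropy on ground-truth labels. The temperature and loss weighting are illustrative values, not settings taken from the cited works.
\begin{verbatim}
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-softened distributions.
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction='batchmean') * (T * T)
    # Hard targets: cross-entropy with the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10).detach()  # teacher is frozen
labels = torch.randint(0, 10, (8,))
loss = kd_loss(student_logits, teacher_logits, labels)
\end{verbatim}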
Since then, KD has been widely used in many fields to boost the efficiency or performance of deep neural models.
For instance,
in <|cite_start|> (Reference: Low-resolution Face Recognition in the Wild via Selective Knowledge Distillation: Typically, the deployment of face recognition models in the wild needs to identify low-resolution faces with extremely low computational cost. To address this problem, a feasible solution is compressing a complex face model to achieve higher speed and lower memory at the cost of minimal performance drop. Inspired by that, this paper proposes a learning approach to recognize low-resolution faces via selective knowledge distillation. In this approach, a two-stream convolutional neural network (CNN) is first initialized to recognize high-resolution faces and resolution-degraded faces with a teacher stream and a student stream, respectively. The teacher stream is represented by a complex CNN for high-accuracy recognition, and the student stream is represented by a much simpler CNN for low-complexity recognition. To avoid significant performance drop at the student stream, we then selectively distil the most informative facial features from the teacher stream by solving a sparse graph optimization problem, which are then used to regularize the fine-tuning process of the student stream. In this way, the student stream is actually trained by simultaneously handling two tasks with limited computational resources: approximating the most informative facial cues via feature regression, and recovering the missing facial cues via low-resolution face classification. Experimental results show that the student stream performs impressively in recognizing low-resolution faces and costs only 0.15MB memory and runs at 418 faces per second on CPU and 9,433 faces per second on GPU.) <|cite_end|>, Ge et al. used a high-resolution and high-accuracy face network as a teacher and proposed to transfer knowledge from the teacher network to the low-resolution face network for performance improvement. <|cite_start|> (Reference: Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer: Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer) <|cite_end|>designed an attention distillation strategy at a pixel-wise level to improve the image classification performance of the lightweight network.
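This pixel-wise attention distillation can be sketched roughly as follows: each feature map is collapsed into a spatial attention map (here the mean of squared activations over channels), normalized, and the student is penalized for deviating from the teacher's map. This is one common variant of the idea rather than the exact objective of the cited paper.
\begin{verbatim}
import torch
import torch.nn.functional as F

def attention_map(feat):
    # Collapse channels into a spatial map and L2-normalize per sample.
    a = feat.pow(2).mean(dim=1)               # (N, H, W)
    return F.normalize(a.flatten(1), dim=1)   # (N, H*W)

def attention_transfer_loss(student_feat, teacher_feat):
    return (attention_map(student_feat)
            - attention_map(teacher_feat.detach())).pow(2).mean()

s = torch.randn(2, 64, 28, 28)    # student feature
t = torch.randn(2, 256, 28, 28)   # teacher may have more channels
loss = attention_transfer_loss(s, t)
\end{verbatim}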
In <|cite_start|> (Reference: Mimicking very efficient network for object detection: Current CNN based object detectors need initialization from pre-trained ImageNet classification models, which are usually time-consuming. In this paper, we present a fully convolutional feature mimic framework to train very efficient CNN based detectors, which do not need ImageNet pre-training and achieve competitive performance as the large and slow models. We add supervision from high-level features of the large networks in training to help the small network better learn object representation. More specifically, we conduct a mimic method for the features sampled from the entire feature map and use a transform layer to map features from the small network onto the same dimension of the large network. In training the small network, we optimize the similarity between features sampled from the same region on the feature maps of both networks. Extensive experiments are conducted on pedestrian and common object detection tasks using VGG, Inception and ResNet. On both Caltech and Pascal VOC, we show that the modified 2.5× accelerated Inception network achieves competitive performance as the full Inception Network. Our faster model runs at 80 FPS for a 1000×1500 large input with only a minor degradation of performance on Caltech.) <|cite_end|>, a feature map mimic approach was developed, which used high-level feature activation from the large model and pixel-wise mimicking to assist a small object detection network.
Hu et al. <|cite_start|> (Reference: Boosting Light-Weight Depth Estimation Via Knowledge Distillation: Monocular depth estimation (MDE) methods are often either too computationally expensive or not accurate enough due to the trade-off between model complexity and inference performance. In this paper, we propose a lightweight network that can accurately estimate depth maps using minimal computing resources. We achieve this by designing a compact model architecture that maximally reduces model complexity. To improve the performance of our lightweight network, we adopt knowledge distillation (KD) techniques. We consider a large network as an expert teacher that accurately estimates depth maps on the target domain. The student, which is the lightweight network, is then trained to mimic the teacher's predictions. However, this KD process can be challenging and insufficient due to the large model capacity gap between the teacher and the student. To address this, we propose to use auxiliary unlabeled data to guide KD, enabling the student to better learn from the teacher's predictions. This approach helps fill the gap between the teacher and the student, resulting in improved data-driven learning. Our extensive experiments show that our method achieves comparable performance to state-of-the-art methods while using only 1% of their parameters. Furthermore, our method outperforms previous lightweight methods regarding inference accuracy, computational efficiency, and generalizability.) <|cite_end|>employed L1 loss to impose supervision on the depth map knowledge learning to improve the depth estimation performance for a lightweight network.
Farhadi et al. <|cite_start|> (Reference: TKD: Temporal Knowledge Distillation for Active Perception: Deep neural networks based methods have been proved to achieve outstanding performance on object detection and classification tasks. Despite significant performance improvement, due to the deep structures, they still require prohibitive runtime to process images and maintain the highest possible performance for real-time applications. Observing the phenomenon that human vision system (HVS) relies heavily on the temporal dependencies among frames from the visual input to conduct recognition efficiently, we propose a novel framework dubbed as TKD: temporal knowledge distillation. This framework distills the temporal knowledge from a heavy neural networks based model over selected video frames (the perception of the moments) to a light-weight model. To enable the distillation, we put forward two novel procedures: 1) an Long-short Term Memory (LSTM) based key frame selection method; and 2) a novel teacher-bounded loss design. To validate, we conduct comprehensive empirical evaluations using different object detection methods over multiple datasets including Youtube-Objects and Hollywood scene dataset. Our results show consistent improvement in accuracy-speed trad-offs for object detection over the frames of the dynamic scene, compare to other modern object recognition methods.) <|cite_end|>transferred the temporal knowledge over the selected video frame from a larger teacher network to the lightweight student model, boosting the efficiency of video recognition with a temporal knowledge distillation strategy.
Zhang et al. <|cite_start|> (Reference: KD-SCFNet: Towards More Accurate and Efficient Salient Object Detection via Knowledge Distillation: Most existing salient object detection (SOD) models are difficult to apply due to the complex and huge model structures. Although some lightweight models are proposed, the accuracy is barely satisfactory. In this paper, we design a novel semantics-guided contextual fusion network (SCFNet) that focuses on the interactive fusion of multi-level features for accurate and efficient salient object detection. Furthermore, we apply knowledge distillation to SOD task and provide a sizeable dataset KD-SOD80K. In detail, we transfer the rich knowledge from a seasoned teacher to the untrained SCFNet through unlabeled images, enabling SCFNet to learn a strong generalization ability to detect salient objects more accurately. The knowledge distillation based SCFNet (KDSCFNet) achieves comparable accuracy to the state-of-the-art heavyweight methods with less than 1M parameters and 174 FPS real-time detection speed. Extensive experiments demonstrate the robustness and effectiveness of the proposed distillation method and SOD framework. Code and data: https://github.com/zhangjinCV/KD-SCFNet.) <|cite_end|>effectively improved the salient object detection accuracy by first using a bigger pre-trained teacher network to provide a saliency prediction that is used to offer weak label supervision for an untrained tiny model.
However, in most of these methods, the student model is trained by distilling individual pixel information. Such pixel-wise distillation neglects the spatial relationships between features and the varying importance of different features on compressed images, even though this information plays an important role in CI SOD.
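To make this contrast concrete, the sketch below compares plain pixel-wise feature distillation with a relation-style loss that matches pairwise similarities between spatial positions. The relation loss is only an illustrative stand-in for the general idea of a relation prior and is not the formulation used in this paper.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pixelwise_distill(student_feat, teacher_feat):
    # Matches features location by location.
    return F.mse_loss(student_feat, teacher_feat.detach())

def relation_distill(student_feat, teacher_feat):
    # Matches pairwise similarities between spatial positions instead.
    def affinity(feat):
        f = F.normalize(feat.flatten(2), dim=1)   # (N, C, H*W), unit channel vectors
        return torch.bmm(f.transpose(1, 2), f)    # (N, H*W, H*W) cosine similarities
    return F.mse_loss(affinity(student_feat), affinity(teacher_feat.detach()))

s = torch.randn(2, 64, 16, 16)
t = torch.randn(2, 64, 16, 16)
loss = pixelwise_distill(s, t) + relation_distill(s, t)
\end{verbatim}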
In contrast, we propose HPL to take spatial correlations and crucial features into account rather than relying on pixel-wise information alone. To this end, we design tailored relation prior and location prior learning strategies, which explicitly exploit valuable relationship and salient-region information, making our model better suited and more robust to the broken structures and blurred areas of compressed images. <|paper_end|>
"<|reference_start|> Pyramidal Attention for Saliency Detection: Salient object detection (SOD) extracts meaningful contents from an input image. RGB-based SOD methods lack the complementary depth clues; hence, providing limited performance for complex scenarios. Similarly, RGB-D models process RGB and depth inputs, but the depth data availability during testing may hinder the model's practical applicability. This paper exploits only RGB images, estimates depth from RGB, and leverages the intermediate depth features. We employ a pyramidal attention structure to extract multi-level convolutional-transformer features to process initial stage representations and further enhance the subsequent ones. At each stage, the backbone transformer model produces global receptive fields and computing in parallel to attain fine-grained global predictions refined by our residual convolutional attention decoder for optimal saliency prediction. We report significantly improved performance against 21 and 40 state-of-the-art SOD methods on eight RGB and RGB-D datasets, respectively. Consequently, we present a new SOD perspective of generating RGB-D SOD without acquiring depth data during training and testing and assist RGB methods with depth clues for improved performance. The code and trained models are available at https://github.com/tanveer-hussain/EfficientSOD2 <|reference_end|>",
"<|reference_start|> Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer: Attention plays a critical role in human visual experience. Furthermore, it has recently been demonstrated that attention can also play an important role in the context of applying artificial neural networks to a variety of tasks from fields such as computer vision and NLP. In this work we show that, by properly defining attention for convolutional neural networks, we can actually use this type of information in order to significantly improve the performance of a student CNN network by forcing it to mimic the attention maps of a powerful teacher network. To that end, we propose several novel methods of transferring attention, showing consistent improvement across a variety of datasets and convolutional neural network architectures. Code and models for our experiments are available at https://github.com/szagoruyko/attention-transfer <|reference_end|>",
"<|reference_start|> Mimicking very efficient network for object detection: Current CNN based object detectors need initialization from pre-trained ImageNet classification models, which are usually time-consuming. In this paper, we present a fully convolutional feature mimic framework to train very efficient CNN based detectors, which do not need ImageNet pre-training and achieve competitive performance as the large and slow models. We add supervision from high-level features of the large networks in training to help the small network better learn object representation. More specifically, we conduct a mimic method for the features sampled from the entire feature map and use a transform layer to map features from the small network onto the same dimension of the large network. In training the small network, we optimize the similarity between features sampled from the same region on the feature maps of both networks. Extensive experiments are conducted on pedestrian and common object detection tasks using VGG, Inception and ResNet. On both Caltech and Pascal VOC, we show that the modified 2.5× accelerated Inception network achieves competitive performance as the full Inception Network. Our faster model runs at 80 FPS for a 1000×1500 large input with only a minor degradation of performance on Caltech. <|reference_end|>",
"<|reference_start|> KD-SCFNet: Towards More Accurate and Efficient Salient Object Detection via Knowledge Distillation: Most existing salient object detection (SOD) models are difficult to apply due to the complex and huge model structures. Although some lightweight models are proposed, the accuracy is barely satisfactory. In this paper, we design a novel semantics-guided contextual fusion network (SCFNet) that focuses on the interactive fusion of multi-level features for accurate and efficient salient object detection. Furthermore, we apply knowledge distillation to SOD task and provide a sizeable dataset KD-SOD80K. In detail, we transfer the rich knowledge from a seasoned teacher to the untrained SCFNet through unlabeled images, enabling SCFNet to learn a strong generalization ability to detect salient objects more accurately. The knowledge distillation based SCFNet (KDSCFNet) achieves comparable accuracy to the state-of-the-art heavyweight methods with less than 1M parameters and 174 FPS real-time detection speed. Extensive experiments demonstrate the robustness and effectiveness of the proposed distillation method and SOD framework. Code and data: https://github.com/zhangjinCV/KD-SCFNet. <|reference_end|>"
] | [
3,
9,
10,
13
] | {"<|multi_cite_1_1|>": "ss-2358108", "<|multi_cite_1_2|>": "ss-2303826", "<|multi_cite_1_3|>": "ss-2358109", "<|multi_cite_2_1|>": "ss-788457", "<|multi_cite_2_2|>": "ss-2358110", "<|multi_cite_2_3|>": "ss-871775", "<|multi_cite_2_4|>": "ss-2111868", "<|multi_cite_2_5|>": "ss-2358111", "<|multi_cite_2_6|>": "ss-2358112", "<|multi_cite_2_7|>": "ss-1623671", "<|multi_cite_2_8|>": "ss-2358113", "<|cite_3|>": "arxiv-68957", "<|multi_cite_4_1|>": "ss-1650669", "<|multi_cite_4_2|>": "ss-1175319", "<|multi_cite_4_3|>": "arxiv-583059", "<|cite_5|>": "arxiv-311955", "<|multi_cite_6_1|>": "arxiv-193531", "<|multi_cite_6_2|>": "ss-988983", "<|multi_cite_7_1|>": "arxiv-251379", "<|multi_cite_7_2|>": "arxiv-279198", "<|multi_cite_8_1|>": "ss-1525763", "<|multi_cite_8_2|>": "ss-1544561", "<|cite_9|>": "ss-1006262", "<|multi_cite_10_1|>": "ss-1090976", "<|multi_cite_10_2|>": "arxiv-67721", "<|multi_cite_10_3|>": "arxiv-200901", "<|multi_cite_10_4|>": "arxiv-200573", "<|multi_cite_10_5|>": "ss-1185387", "<|multi_cite_10_6|>": "ss-775192", "<|multi_cite_10_8|>": "arxiv-382050", "<|multi_cite_10_9|>": "arxiv-279198", "<|multi_cite_10_10|>": "arxiv-278770", "<|multi_cite_10_11|>": "ss-1185388", "<|multi_cite_10_12|>": "ss-738644", "<|multi_cite_10_13|>": "ss-1185392", "<|multi_cite_10_14|>": "ss-1475271", "<|cite_11|>": "arxiv-251379", "<|cite_12|>": "arxiv-250305", "<|cite_13|>": "arxiv-279198", "<|cite_14|>": "arxiv-311955", "<|cite_15|>": "arxiv-315922", "<|cite_16|>": "ss-1355970", "<|cite_17|>": "ss-1525763", "<|cite_18|>": "ss-1276892", "<|cite_19|>": "ss-1544561", "<|cite_20|>": "ss-1302821", "<|cite_21|>": "ss-1268489", "<|cite_22|>": "arxiv-185653", "<|cite_23|>": "ss-945419", "<|cite_24|>": "arxiv-193531", "<|cite_25|>": "ss-988983", "<|cite_26|>": "ss-1199732", "<|cite_27|>": "arxiv-336785", "<|cite_28|>": "ss-1171143", "<|cite_29|>": "arxiv-413067", "<|cite_30|>": "arxiv-412339", "<|cite_31|>": "ss-1475275", "<|cite_32|>": "ss-2438763", "<|cite_33|>": "arxiv-74282", "<|cite_34|>": "arxiv-181698", "<|cite_35|>": "arxiv-112389", "<|cite_36|>": "ss-1261246", "<|cite_37|>": "arxiv-340701", "<|cite_38|>": "arxiv-193948", "<|cite_39|>": "arxiv-438133"} |
2205.03777-1 | <|cite_start|> (Reference: Learning Spatial Attention for Face Super-Resolution: General image super-resolution techniques have difficulties in recovering detailed face structures when applying to low resolution face images. Recent deep learning based methods tailored for face images have achieved improved performance by jointly trained with additional task such as face parsing and landmark prediction. However, multi-task learning requires extra manually labeled data. Besides, most of the existing works can only generate relatively low resolution face images (e.g., $128\times128$), and their applications are therefore limited. In this paper, we introduce a novel SPatial Attention Residual Network (SPARNet) built on our newly proposed Face Attention Units (FAUs) for face super-resolution. Specifically, we introduce a spatial attention mechanism to the vanilla residual blocks. This enables the convolutional layers to adaptively bootstrap features related to the key face structures and pay less attention to those less feature-rich regions. This makes the training more effective and efficient as the key face structures only account for a very small portion of the face image. Visualization of the attention maps shows that our spatial attention network can capture the key face structures well even for very low resolution faces (e.g., $16\times16$). Quantitative comparisons on various kinds of metrics (including PSNR, SSIM, identity similarity, and landmark detection) demonstrate the superiority of our method over current state-of-the-arts. We further extend SPARNet with multi-scale discriminators, named as SPARNetHD, to produce high resolution results (i.e., $512\times512$). We show that SPARNetHD trained with synthetic data cannot only produce high quality and high resolution outputs for synthetically degraded face images, but also show good generalization ability to real world low quality face images.) <|cite_end|>integrates the spatial attention mechanism into their framework to improve the representation ability of the network.
WaSRNet <|cite_start|> (Reference: Wavelet-SRNet: A wavelet-based CNN for multi-scale face super resolution: Most modern face super-resolution methods resort to convolutional neural networks (CNN) to infer highresolution (HR) face images. When dealing with very low resolution (LR) images, the performance of these CNN based methods greatly degrades. Meanwhile, these methods tend to produce over-smoothed outputs and miss some textural details. To address these challenges, this paper presents a wavelet-based CNN approach that can ultra-resolve a very low resolution face image of 16 × 16 or smaller pixelsize to its larger version of multiple scaling factors (2×, 4×, 8× and even 16×) in a unified framework. Different from conventional CNN methods directly inferring HR images, our approach firstly learns to predict the LR’s corresponding series of HR’s wavelet coefficients before reconstructing HR images from them. To capture both global topology information and local texture details of human faces, we present a flexible and extensible convolutional neural network with three types of loss: wavelet prediction loss, texture loss and full-image loss. Extensive experiments demonstrate that the proposed approach achieves more appealing results both quantitatively and qualitatively than state-ofthe- art super-resolution methods.) <|cite_end|>transforms the face image domain into the wavelet coefficient domain to preserve more details.
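The wavelet-domain formulation can be illustrated with PyWavelets: a 2D DWT splits a face image into a low-frequency approximation and three high-frequency detail sub-bands, a network is trained to predict the HR sub-bands, and the HR image is recovered by the inverse transform. Only the transform pair is sketched here; the prediction network is omitted, and the 'haar' wavelet and image size are arbitrary choices.
\begin{verbatim}
import numpy as np
import pywt  # PyWavelets

# A stand-in grayscale face image with values in [0, 1].
img = np.random.rand(128, 128).astype(np.float32)

# One-level 2D DWT: approximation plus (horizontal, vertical, diagonal) details.
cA, (cH, cV, cD) = pywt.dwt2(img, 'haar')  # each sub-band is 64x64

# A wavelet-domain SR network would predict HR versions of these
# coefficients; the HR image is recovered by the inverse transform.
reconstructed = pywt.idwt2((cA, (cH, cV, cD)), 'haar')
assert np.allclose(reconstructed, img, atol=1e-4)
\end{verbatim}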
Lu et al. <|cite_start|> (Reference: Global-local fusion network for face super-resolution: ) <|cite_end|>proposed a hybrid approach based on a global upsampling network and a local enhancement network to jointly enhance the facial contours and local details.
The Residual Attribute Attention Network <|cite_start|> (Reference: {r: 秋田県県南部は山間地帯であり,地域柄山の幸に恵まれている.山菜はその時期のみに豊富に採れる自然の産物であり,収穫後一部加工をすることもあるが,多くは旬のうちに大量摂取してしまいがちである.当院でワルファリンカリウム(ワーファリンR)コントロールを行っている患者を外来で管理している医師らは,春から初夏にかけてワーファリンRコントロールが崩れるという印象を持っており,問診によるとほとんどが山菜の摂取が原因であったという. 当病棟の,ワーファリンR服用指導は製薬会社の指導用パンフレットを使用しており,それには,禁食として納豆・クロレラ・青汁,大量摂取を避けるものとして緑黄色野菜が挙げられているものの,海草や山菜に対する記述はない.当地域のように山菜を大量に摂取する地域では,それによるワーファリンRコントロール不良のリスクを最小限にするためには,現在の指導内容では不足であると考えた. そこで,ワーファリンRの作用と拮抗し,ワーファリンRの作用を減弱させると多くの文献で言われているビタミンK の含有量が見た目に分かりやすいように,指導用のパンフレットを作成した.また,ワーファリンR服用開始時に栄養士による食事指導を組み込むことで,ビタミンK を多く含有する食物の摂取方法に対する意識付けを行ったので報告する.) <|cite_end|>employs a multi-block cascaded structure to extract pixel-level representation and semantic-level identification information from LR face images and restores high-resolution images via efficient feature fusion.
The Facial Attribute Capsule Network <|cite_start|> (Reference: Facial Attribute Capsules for Noise Face Super Resolution: Existing face super-resolution (SR) methods mainly assume the input image to be noise-free. Their performance degrades drastically when applied to real-world scenarios where the input image is always contaminated by noise. In this paper, we propose a Facial Attribute Capsules Network (FACN) to deal with the problem of high-scale super-resolution of noisy face image. Capsule is a group of neurons whose activity vector models different properties of the same entity. Inspired by the concept of capsule, we propose an integrated representation model of facial information, which named Facial Attribute Capsule (FAC). In the SR processing, we first generated a group of FACs from the input LR face, and then reconstructed the HR face from this group of FACs. Aiming to effectively improve the robustness of FAC to noise, we generate FAC in semantic, probabilistic and facial attributes manners by means of integrated learning strategy. Each FAC can be divided into two sub-capsules: Semantic Capsule (SC) and Probabilistic Capsule (PC). Them describe an explicit facial attribute in detail from two aspects of semantic representation and probability distribution. The group of FACs model an image as a combination of facial attribute information in the semantic space and probabilistic space by an attribute-disentangling way. The diverse FACs could better combine the face prior information to generate the face images with fine-grained semantic attributes. Extensive benchmark experiments show that our method achieves superior hallucination results and outperforms state-of-the-art for very low resolution (LR) noise face image super resolution.) <|cite_end|>converts the extracted LR face image features into a set of facial attribute capsules by the proposed capsule generation block and utilizes the facial attribute information from both semantic space and probability space to generate the corresponding HR results.
However, since they are trained on synthetic images, these discriminative-learning-based methods cannot generalize well to real-world scenarios.
\begin{figure*}[t]
\centering
\begin{overpic}[width=\textwidth]{Imgs_FaceSR/TSNE_1230.pdf}
\put(7.6,20){\textbf{\scriptsize{HR face images}}}
\put(11.4,0.4){\textbf{\footnotesize{(a)}}}
\put(34.4,20){\textbf{\scriptsize{PULSE} <|cite_start|> (Reference: PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models: The primary aim of single-image super-resolution is to construct high-resolution (HR) images from corresponding low-resolution (LR) inputs. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require supervised training on databases of LR-HR image pairs). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee realistic outputs. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show proof of concept of our approach in the domain of face super-resolution (i.e., face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.) <|cite_end|>}}
\put(36.6,0.4){\textbf{\footnotesize{(b)}}}
\put(59,20){\textbf{\scriptsize{Fully-cycled <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>}}}
\put(62.4,0.4){\textbf{\footnotesize{(c)}}}
\put(85.2,20){\textbf{\scriptsize{Semi-cycled}}}
\put(87.8,0.4){\textbf{\footnotesize{(d)}}}
\end{overpic}
\caption{\textbf{Distributions of the feature maps extracted by ResNet-101 <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|>from HR and SR face images using t-SNE <|cite_start|> (Reference: Visualizing data using t-sne.: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.) <|cite_end|>}.
(a) Visualization of the HR face images.
(b) Visualization of the SR face images restored by state-of-the-art face SR method PULSE <|cite_start|> (Reference: PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models: The primary aim of single-image super-resolution is to construct high-resolution (HR) images from corresponding low-resolution (LR) inputs. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require supervised training on databases of LR-HR image pairs). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee realistic outputs. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show proof of concept of our approach in the domain of face super-resolution (i.e., face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.) <|cite_end|>.
(c) Visualization of the SR face images restored by fully-cycled CycleGAN <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>.
(d) Visualization of the SR face images restored by our semi-cycled SCGAN.
Compared to the fully-cycled CycleGAN, our semi-cycled architecture better preserves the feature distribution of the SR face images.
}
\label{fig:tsne}
\vspace{-3mm}
\end{figure*}
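For completeness, the feature visualization in Fig.~\ref{fig:tsne} follows a standard pipeline: deep features are extracted from the face images with a pre-trained ResNet-101 and then embedded into two dimensions with t-SNE. The snippet below is a minimal sketch of such a pipeline rather than the exact script used for the figure; the data-loading helper \texttt{load\_face\_batch} is a hypothetical placeholder.
\begin{verbatim}
import numpy as np
import torch
import torchvision.models as models
from sklearn.manifold import TSNE

# Pre-trained ResNet-101 with the final classification layer removed,
# so a forward pass yields 2048-d globally pooled feature vectors.
backbone = models.resnet101(pretrained=True)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

def extract_features(images):
    # images: float tensor of shape (N, 3, 224, 224), ImageNet-normalized.
    with torch.no_grad():
        feats = extractor(images)      # (N, 2048, 1, 1)
    return feats.flatten(1).numpy()    # (N, 2048)

# load_face_batch is a hypothetical helper returning image tensors.
hr_feats = extract_features(load_face_batch("hr"))
sr_feats = extract_features(load_face_batch("sr"))

# Embed HR and SR features jointly so the 2-D layouts are comparable.
all_feats = np.concatenate([hr_feats, sr_feats], axis=0)
embedding = TSNE(n_components=2, init="pca").fit_transform(all_feats)
\end{verbatim}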
Generative models like Generative Adversarial Networks (GANs) <|cite_start|> (Reference: Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision. However, this model alone does not produce images...) <|cite_end|>have achieved remarkable progress on face SR <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image Super-Resolution using Pseudo-Supervision: In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In this paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset. Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. The correction network removes noise and adjusts the kernel of the inputted LR image; then, the corrected clean LR image is upscaled by the SR network. In the training phase, the correction network also produces a pseudo-clean LR image from the inputted HR image, and then a mapping from the pseudo-clean LR image to the inputted HR image is learned by the SR network in a paired manner. Because our SR network is independent of the correction network, well-studied existing network architectures and pixel-wise loss functions can be integrated with the proposed framework. Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Image Super-Resolution with an Indirect Supervised Path: The task of single image super-resolution (SISR) aims at reconstructing a high-resolution (HR) image from a low-resolution (LR) image. Although significant progress has been made by deep learning models, they are trained on synthetic paired data in a supervised way and do not perform well on real data. 
There are several attempts that directly apply unsupervised image translation models to address such a problem. However, unsupervised low-level vision problem poses more challenge on the accuracy of translation. In this work,we propose a novel framework which is composed of two stages: 1) unsupervised image translation between real LR images and synthetic LR images; 2) supervised super-resolution from approximated real LR images to HR images. It takes the synthetic LR images as a bridge and creates an indirect supervised path from real LR images to HR images. Any existed deep learning based image super-resolution model can be integrated into the second stage of the proposed framework for further improvement. In addition it shows great flexibility in balancing between distortion and perceptual quality under unsupervised setting. The proposed method is evaluated on both NTIRE 2017 and 2018 challenge datasets and achieves favorable performance against supervised methods.) <|cite_end|> <|cite_start|> (Reference: Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution: Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinite HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, the paired LR-HR data may be unavailable in real-world applications and the underlying degradation method is often unknown. For such a more general case, existing SR models often incur the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping estimates the down-sampling kernel and reconstruct LR images, which forms a closed-loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.) <|cite_end|> <|cite_start|> (Reference: Deblurring by Realistic Blurring: Existing deep learning methods for image deblurring typically train models using pairs of sharp images and their blurred counterparts. However, synthetically blurring images do not necessarily model the genuine blurring process in real-world scenarios with sufficient accuracy. To address this problem, we propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and learning-to-DeBlur GAN (DBGAN), in order to learn a better model for image deblurring by primarily learning how to blur images. The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images. In order to reduce the discrepancy between real blur and synthesized blur, a relativistic blur loss is leveraged. 
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images. Our experiments show that the proposed method achieves consistently superior quantitative performance as well as higher perceptual quality on both the newly proposed dataset and the public GOPRO dataset.) <|cite_end|>.
URDGN <|cite_start|> (Reference: Ultra-Resolving Face Images by Discriminative Generative Networks: ) <|cite_end|>is among the first works in this direction, but it is sensitive to LR face images with large rotations or pose variations.
To alleviate this problem, Super-FAN <|cite_start|> (Reference: Super-FAN: Integrated facial landmark localization and super-resolution of real-world low resolution faces in arbitrary poses with GANs: This paper addresses 2 challenging tasks: improving the quality of low resolution facial images and accurately locating the facial landmarks on such poor resolution images. To this end, we make the following 5 contributions: (a) we propose Super-FAN: the very first end-to-end system that addresses both tasks simultaneously, i.e. both improves face resolution and detects the facial landmarks. The novelty or Super-FAN lies in incorporating structural information in a GAN-based super-resolution algorithm via integrating a sub-network for face alignment through heatmap regression and optimizing a novel heatmap loss. (b) We illustrate the benefit of training the two networks jointly by reporting good results not only on frontal images (as in prior work) but on the whole spectrum of facial poses, and not only on synthetic low resolution images (as in prior work) but also on real-world images. (c) We improve upon the state-of-the-art in face super-resolution by proposing a new residual-based architecture. (d) Quantitatively, we show large improvement over the state-of-the-art for both face super-resolution and alignment. (e) Qualitatively, we show for the first time good results on real-world low resolution images.) <|cite_end|>locates the key points of faces via heat map regression to deal with faces in different angles and poses, which needs large-scale annotations of face landmarks for model training.
LRGAN <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|>is an unsupervised face SR network by utilizing the architecture of cycle consistency <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>.
However, this method only enforces the reconstruction consistency of the HR face images, while ignoring that of the LR ones.
PULSE <|cite_start|> (Reference: PULSE: Self-Supervised Photo Upsampling via Latent Space Exploration of Generative Models: The primary aim of single-image super-resolution is to construct high-resolution (HR) images from corresponding low-resolution (LR) inputs. In previous approaches, which have generally been supervised, the training objective typically measures a pixel-wise average distance between the super-resolved (SR) and HR images. Optimizing such metrics often leads to blurring, especially in high variance (detailed) regions. We propose an alternative formulation of the super-resolution problem based on creating realistic SR images that downscale correctly. We present an algorithm addressing this problem, PULSE (Photo Upsampling via Latent Space Exploration), which generates high-resolution, realistic images at resolutions previously unseen in the literature. It accomplishes this in an entirely self-supervised fashion and is not confined to a specific degradation operator used during training, unlike previous methods (which require supervised training on databases of LR-HR image pairs). Instead of starting with the LR image and slowly adding detail, PULSE traverses the high-resolution natural image manifold, searching for images that downscale to the original LR image. This is formalized through the "downscaling loss," which guides exploration through the latent space of a generative model. By leveraging properties of high-dimensional Gaussians, we restrict the search space to guarantee realistic outputs. PULSE thereby generates super-resolved images that both are realistic and downscale correctly. We show proof of concept of our approach in the domain of face super-resolution (i.e., face hallucination). We also present a discussion of the limitations and biases of the method as currently implemented with an accompanying model card with relevant metrics. Our method outperforms state-of-the-art methods in perceptual quality at higher resolutions and scale factors than previously possible.) <|cite_end|>often loses spatial information and identity consistency of face images, by randomly sampling the low-dimensional latent codes.
The methods of GLEAN <|cite_start|> (Reference: GLEAN: Generative Latent Bank for Large-Factor Image Super-Resolution: We show that pre-trained Generative Adversarial Networks (GANs), e.g., StyleGAN, can be used as a latent bank to improve the restoration quality of large-factor image super-resolution (SR). While most existing SR approaches attempt to generate realistic textures through learning with adversarial loss, our method, Generative LatEnt bANk (GLEAN), goes beyond existing practices by directly leveraging rich and diverse priors encapsulated in a pre-trained GAN. But unlike prevalent GAN inversion methods that require expensive image-specific optimization at runtime, our approach only needs a single forward pass to generate the upscaled image. GLEAN can be easily incorporated in a simple encoder-bank-decoder architecture with multi-resolution skip connections. Switching the bank allows the method to deal with images from diverse categories, e.g., cat, building, human face, and car. Images upscaled by GLEAN show clear improvements in terms of fidelity and texture faithfulness in comparison to existing methods.) <|cite_end|>, GFPGAN <|cite_start|> (Reference: Towards Real-World Blind Face Restoration with Generative Facial Prior: Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets.) <|cite_end|>and GPEN <|cite_start|> (Reference: GAN Prior Embedded Network for Blind Face Restoration in the Wild: Blind face restoration (BFR) from severely degraded face images in the wild is a very challenging problem. Due to the high illness of the problem and the complex unknown degradation, directly training a deep neural network (DNN) usually cannot lead to acceptable results. Existing generative adversarial network (GAN) based methods can produce better results but tend to generate over-smoothed restorations. In this work, we propose a new method by first learning a GAN for high-quality face image generation and embedding it into a U-shaped DNN as a prior decoder, then fine-tuning the GAN prior embedded DNN with a set of synthesized low-quality face images. The GAN blocks are designed to ensure that the latent code and noise input to the GAN can be respectively generated from the deep and shallow features of the DNN, controlling the global face structure, local face details and background of the reconstructed image. The proposed GAN prior embedded network (GPEN) is easy-to-implement, and it can generate visually photo-realistic results. 
Our experiments demonstrated that the proposed GPEN achieves significantly superior results to state-of-the-art BFR methods both quantitatively and qualitatively, especially for the restoration of severely degraded face images in the wild. The source code and models can be found at https://github.com/yangxy/GPEN.) <|cite_end|>utilize a pre-trained StyleGAN <|cite_start|> (Reference: A Style-Based Generator Architecture for Generative Adversarial Networks: We propose an alternative generator architecture for generative adversarial networks, borrowing from style transfer literature. The new architecture leads to an automatically learned, unsupervised separation of high-level attributes (e.g., pose and identity when trained on human faces) and stochastic variation in the generated images (e.g., freckles, hair), and it enables intuitive, scale-specific control of the synthesis. The new generator improves the state-of-the-art in terms of traditional distribution quality metrics, leads to demonstrably better interpolation properties, and also better disentangles the latent factors of variation. To quantify interpolation quality and disentanglement, we propose two new, automated methods that are applicable to any generator architecture. Finally, we introduce a new, highly varied and high-quality dataset of human faces.) <|cite_end|>model for face SR, but show limited performance on LR face images with severe degradation.
In this work, we propose to learn three forward and backward mappings, \ie, two independent ``learning-to-degrade'' branches and one shared ``learning-to-SR'' branch, which are semi-cycled to maintain the reconstruction consistency of both the HR and LR face images.
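To make the semi-cycled design concrete, let $x$ denote a real-world LR face image, $y$ an HR face image, $G_{sr}$ the shared ``learning-to-SR'' branch, and $G_{d}^{1}$, $G_{d}^{2}$ the two ``learning-to-degrade'' branches. A schematic way to write the two reconstruction constraints implied by this design (the notation here is illustrative and not the exact objective of our method) is
\[
\mathcal{L}_{rec}^{hr} = \big\| G_{sr}\big(G_{d}^{1}(y)\big) - y \big\|_{1},
\qquad
\mathcal{L}_{rec}^{lr} = \big\| G_{d}^{2}\big(G_{sr}(x)\big) - x \big\|_{1},
\]
so that the HR and LR reconstructions are each constrained by their own half-cycle, instead of a single fully-cycled loop.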
\vspace{-2mm}
\subsection{Generative Adversarial Networks}
Generative Adversarial Networks (GANs) <|cite_start|> (Reference: Generative {{Adversarial Nets}}: CNN and RNN are classifiers for image and speech recognition, and are used in many computer vision. However, this model alone does not produce images...) <|cite_end|>have been widely utilized in unsupervised computer vision tasks with great success <|cite_start|> (Reference: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets: This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.) <|cite_end|> <|cite_start|> (Reference: Conditional Generative Adversarial Nets: Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.) <|cite_end|> <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) 
<|cite_end|> <|cite_start|> (Reference: High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs: We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048x1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|> <|cite_start|> (Reference: DualGAN: Unsupervised Dual Learning for Image-to-Image Translation: Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. 
Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.) <|cite_end|> <|cite_start|> (Reference: Learning to Discover Cross-Domain Relations with Generative Adversarial Networks: While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available https://github.com/SKTBrain/DiscoGAN) <|cite_end|> <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image Super-Resolution using Pseudo-Supervision: In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In this paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset. Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. The correction network removes noise and adjusts the kernel of the inputted LR image; then, the corrected clean LR image is upscaled by the SR network. In the training phase, the correction network also produces a pseudo-clean LR image from the inputted HR image, and then a mapping from the pseudo-clean LR image to the inputted HR image is learned by the SR network in a paired manner. 
Because our SR network is independent of the correction network, well-studied existing network architectures and pixel-wise loss functions can be integrated with the proposed framework. Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Image Super-Resolution with an Indirect Supervised Path: The task of single image super-resolution (SISR) aims at reconstructing a high-resolution (HR) image from a low-resolution (LR) image. Although significant progress has been made by deep learning models, they are trained on synthetic paired data in a supervised way and do not perform well on real data. There are several attempts that directly apply unsupervised image translation models to address such a problem. However, unsupervised low-level vision problem poses more challenge on the accuracy of translation. In this work,we propose a novel framework which is composed of two stages: 1) unsupervised image translation between real LR images and synthetic LR images; 2) supervised super-resolution from approximated real LR images to HR images. It takes the synthetic LR images as a bridge and creates an indirect supervised path from real LR images to HR images. Any existed deep learning based image super-resolution model can be integrated into the second stage of the proposed framework for further improvement. In addition it shows great flexibility in balancing between distortion and perceptual quality under unsupervised setting. The proposed method is evaluated on both NTIRE 2017 and 2018 challenge datasets and achieves favorable performance against supervised methods.) <|cite_end|> <|cite_start|> (Reference: Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution: Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinite HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, the paired LR-HR data may be unavailable in real-world applications and the underlying degradation method is often unknown. For such a more general case, existing SR models often incur the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping estimates the down-sampling kernel and reconstruct LR images, which forms a closed-loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.) 
<|cite_end|> <|cite_start|> (Reference: Deblurring by Realistic Blurring: Existing deep learning methods for image deblurring typically train models using pairs of sharp images and their blurred counterparts. However, synthetically blurring images do not necessarily model the genuine blurring process in real-world scenarios with sufficient accuracy. To address this problem, we propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and learning-to-DeBlur GAN (DBGAN), in order to learn a better model for image deblurring by primarily learning how to blur images. The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images. In order to reduce the discrepancy between real blur and synthesized blur, a relativistic blur loss is leveraged. As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images. Our experiments show that the proposed method achieves consistently superior quantitative performance as well as higher perceptual quality on both the newly proposed dataset and the public GOPRO dataset.) <|cite_end|>.
InfoGAN <|cite_start|> (Reference: InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets: This paper describes InfoGAN, an information-theoretic extension to the Generative Adversarial Network that is able to learn disentangled representations in a completely unsupervised manner. InfoGAN is a generative adversarial network that also maximizes the mutual information between a small subset of the latent variables and the observation. We derive a lower bound to the mutual information objective that can be optimized efficiently, and show that our training procedure can be interpreted as a variation of the Wake-Sleep algorithm. Specifically, InfoGAN successfully disentangles writing styles from digit shapes on the MNIST dataset, pose from lighting of 3D rendered images, and background digits from the central digit on the SVHN dataset. It also discovers visual concepts that include hair styles, presence/absence of eyeglasses, and emotions on the CelebA face dataset. Experiments show that InfoGAN learns interpretable representations that are competitive with representations learned by existing fully supervised methods.) <|cite_end|>learns explainable feature representation by decomposing the input noise vector into incompressible noise and latent codes, to control semantic features of the generated images.
Conditional GAN (cGAN) <|cite_start|> (Reference: Conditional Generative Adversarial Nets: Generative Adversarial Nets [8] were recently introduced as a novel way to train generative models. In this work we introduce the conditional version of generative adversarial nets, which can be constructed by simply feeding the data, y, we wish to condition on to both the generator and discriminator. We show that this model can generate MNIST digits conditioned on class labels. We also illustrate how this model could be used to learn a multi-modal model, and provide preliminary examples of an application to image tagging in which we demonstrate how this approach can generate descriptive tags which are not part of training labels.) <|cite_end|>adds to the original GAN an extra training supervision, achieving great success on image translation tasks <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) <|cite_end|> <|cite_start|> (Reference: High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs: We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs). Conditional GANs have enabled a variety of applications, but the results are often limited to low-resolution and still far from realistic. In this work, we generate 2048x1024 visually appealing results with a novel adversarial loss, as well as new multi-scale generator and discriminator architectures. Furthermore, we extend our framework to interactive visual manipulation with two additional features. First, we incorporate object instance segmentation information, which enables object manipulations such as removing/adding objects and changing the object category. Second, we propose a method to generate diverse results given the same input, allowing users to edit the object appearance interactively. Human opinion studies demonstrate that our method significantly outperforms existing methods, advancing both the quality and the resolution of deep image synthesis and editing.) <|cite_end|>.
With the insight of cycle consistency, the methods of CycleGAN <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>, DualGAN <|cite_start|> (Reference: DualGAN: Unsupervised Dual Learning for Image-to-Image Translation: Conditional Generative Adversarial Networks (GANs) for cross-domain image-to-image translation have made much progress recently. Depending on the task complexity, thousands to millions of labeled image pairs are needed to train a conditional GAN. However, human labeling is expensive, even impractical, and large quantities of data may not always be available. Inspired by dual learning from natural language translation, we develop a novel dual-GAN mechanism, which enables image translators to be trained from two sets of unlabeled images from two domains. In our architecture, the primal GAN learns to translate images from domain U to those in domain V, while the dual GAN learns to invert the task. The closed loop made by the primal and dual tasks allows images from either domain to be translated and then reconstructed. Hence a loss function that accounts for the reconstruction error of images can be used to train the translators. Experiments on multiple image translation tasks with unlabeled data show considerable performance gain of DualGAN over a single GAN. For some tasks, DualGAN can even achieve comparable or slightly better results than conditional GAN trained on fully labeled data.) <|cite_end|>, and DiscoGAN <|cite_start|> (Reference: Learning to Discover Cross-Domain Relations with Generative Adversarial Networks: While humans easily recognize relations between data from different domains without any supervision, learning to automatically discover them is in general very challenging and needs many ground-truth pairs that illustrate the relations. To avoid costly pairing, we address the task of discovering cross-domain relations given unpaired data. We propose a method based on generative adversarial networks that learns to discover relations between different domains (DiscoGAN). Using the discovered relations, our proposed network successfully transfers style from one domain to another while preserving key attributes such as orientation and face identity. Source code for official implementation is publicly available https://github.com/SKTBrain/DiscoGAN) <|cite_end|>achieve promising performance on image translation tasks.
This insight has also been resorted by many image restoration methods <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image Super-Resolution using Pseudo-Supervision: In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In this paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset. Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. The correction network removes noise and adjusts the kernel of the inputted LR image; then, the corrected clean LR image is upscaled by the SR network. In the training phase, the correction network also produces a pseudo-clean LR image from the inputted HR image, and then a mapping from the pseudo-clean LR image to the inputted HR image is learned by the SR network in a paired manner. Because our SR network is independent of the correction network, well-studied existing network architectures and pixel-wise loss functions can be integrated with the proposed framework. Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Image Super-Resolution with an Indirect Supervised Path: The task of single image super-resolution (SISR) aims at reconstructing a high-resolution (HR) image from a low-resolution (LR) image. Although significant progress has been made by deep learning models, they are trained on synthetic paired data in a supervised way and do not perform well on real data. There are several attempts that directly apply unsupervised image translation models to address such a problem. However, unsupervised low-level vision problem poses more challenge on the accuracy of translation. 
In this work,we propose a novel framework which is composed of two stages: 1) unsupervised image translation between real LR images and synthetic LR images; 2) supervised super-resolution from approximated real LR images to HR images. It takes the synthetic LR images as a bridge and creates an indirect supervised path from real LR images to HR images. Any existed deep learning based image super-resolution model can be integrated into the second stage of the proposed framework for further improvement. In addition it shows great flexibility in balancing between distortion and perceptual quality under unsupervised setting. The proposed method is evaluated on both NTIRE 2017 and 2018 challenge datasets and achieves favorable performance against supervised methods.) <|cite_end|> <|cite_start|> (Reference: Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution: Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinite HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, the paired LR-HR data may be unavailable in real-world applications and the underlying degradation method is often unknown. For such a more general case, existing SR models often incur the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping estimates the down-sampling kernel and reconstruct LR images, which forms a closed-loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.) <|cite_end|> <|cite_start|> (Reference: Deblurring by Realistic Blurring: Existing deep learning methods for image deblurring typically train models using pairs of sharp images and their blurred counterparts. However, synthetically blurring images do not necessarily model the genuine blurring process in real-world scenarios with sufficient accuracy. To address this problem, we propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and learning-to-DeBlur GAN (DBGAN), in order to learn a better model for image deblurring by primarily learning how to blur images. The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images. In order to reduce the discrepancy between real blur and synthesized blur, a relativistic blur loss is leveraged. As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images. 
Our experiments show that the proposed method achieves consistently superior quantitative performance as well as higher perceptual quality on both the newly proposed dataset and the public GOPRO dataset.) <|cite_end|> <|cite_start|> (Reference: Quality Metric Guided Portrait Line Drawing Generation from Unpaired Training Data: Face portrait line drawing is a unique style of art which is highly abstract and expressive. However, due to its high semantic constraints, many existing methods learn to generate portrait drawings using paired training data, which is costly and time-consuming to obtain. In this paper, we propose a novel method to automatically transform face photos to portrait drawings using unpaired training data with two new features; i.e., our method can (1) learn to generate high quality portrait drawings in multiple styles using a single network and (2) generate portrait drawings in a "new style" unseen in the training data. To achieve these benefits, we (1) propose a novel quality metric for portrait drawings which is learned from human perception, and (2) introduce a quality loss to guide the network toward generating better looking portrait drawings. We observe that existing unpaired translation methods such as CycleGAN tend to embed invisible reconstruction information indiscriminately in the whole drawings due to significant information imbalance between the photo and portrait drawing domains, which leads to important facial features missing. To address this problem, we propose a novel asymmetric cycle mapping that enforces the reconstruction information to be visible and only embedded in the selected facial regions. Along with localized discriminators for important facial regions, our method well preserves all important facial features in the generated drawings. Generator dissection further explains that our model learns to incorporate face semantic information during drawing generation. Extensive experiments including a user study show that our model outperforms state-of-the-art methods.) <|cite_end|>.
Among them, LRGAN <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|>introduces two cycle-consistent generators <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>for face SR: a ``learning-to-degrade'' branch for HR image degradation and a ``learning-to-SR'' branch for LR face image super-resolution.
However, the two branches are coupled only for HR face image reconstruction, leaving a potential gap between the unpaired LR and HR face images.
In this work, we also exploit the powerful generative capability of CycleGAN <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>for unsupervised real-world face SR.
Built upon LRGAN <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|>, our SCGAN introduces an additional ``learning-to-degrade'' branch to degrade the super-resolved face images, which are supervised by the real-world LR ones.
\vspace{-3mm}
\subsection{Cycle-Consistent Learning}
The framework of cycle-consistent learning was originally developed for image-to-image translation <|cite_start|> (Reference: Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks: Image-to-image translation is a class of vision and graphics problems where the goal is to learn the mapping between an input image and an output image using a training set of aligned image pairs. However, for many tasks, paired training data will not be available. We present an approach for learning to translate an image from a source domain $X$ to a target domain $Y$ in the absence of paired examples. Our goal is to learn a mapping $G: X \rightarrow Y$ such that the distribution of images from $G(X)$ is indistinguishable from the distribution $Y$ using an adversarial loss. Because this mapping is highly under-constrained, we couple it with an inverse mapping $F: Y \rightarrow X$ and introduce a cycle consistency loss to push $F(G(X)) \approx X$ (and vice versa). Qualitative results are presented on several tasks where paired training data does not exist, including collection style transfer, object transfiguration, season transfer, photo enhancement, etc. Quantitative comparisons against several prior methods demonstrate the superiority of our approach.) <|cite_end|>to jointly learn a pair of coupled branches via backward domain transfer.
From then on, researchers have exploited the cycle-consistent learning framework for many vision tasks such as image restoration <|cite_start|> (Reference: To learn image super-resolution, use a GAN to learn how to do image degradation first: This paper is on image and face super-resolution. The vast majority of prior work for this problem focus on how to increase the resolution of low-resolution images which are artificially generated by simple bilinear down-sampling (or in a few cases by blurring followed by down-sampling).We show that such methods fail to produce good results when applied to real-world low-resolution, low quality images. To circumvent this problem, we propose a two-stage process which firstly trains a High-to-Low Generative Adversarial Network (GAN) to learn how to degrade and downsample high-resolution images requiring, during training, only unpaired high and low-resolution images. Once this is achieved, the output of this network is used to train a Low-to-High GAN for image super-resolution using this time paired low- and high-resolution images. Our main result is that this network can be now used to efectively increase the quality of real-world low-resolution images. We have applied the proposed pipeline for the problem of face super-resolution where we report large improvement over baselines and prior work although the proposed method is potentially applicable to other object categories.) <|cite_end|> <|cite_start|> (Reference: Unpaired Image Super-Resolution using Pseudo-Supervision: In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In this paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset. Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. The correction network removes noise and adjusts the kernel of the inputted LR image; then, the corrected clean LR image is upscaled by the SR network. In the training phase, the correction network also produces a pseudo-clean LR image from the inputted HR image, and then a mapping from the pseudo-clean LR image to the inputted HR image is learned by the SR network in a paired manner. Because our SR network is independent of the correction network, well-studied existing network architectures and pixel-wise loss functions can be integrated with the proposed framework. Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Image Super-Resolution with an Indirect Supervised Path: The task of single image super-resolution (SISR) aims at reconstructing a high-resolution (HR) image from a low-resolution (LR) image. Although significant progress has been made by deep learning models, they are trained on synthetic paired data in a supervised way and do not perform well on real data. There are several attempts that directly apply unsupervised image translation models to address such a problem. However, unsupervised low-level vision problem poses more challenge on the accuracy of translation. 
In this work,we propose a novel framework which is composed of two stages: 1) unsupervised image translation between real LR images and synthetic LR images; 2) supervised super-resolution from approximated real LR images to HR images. It takes the synthetic LR images as a bridge and creates an indirect supervised path from real LR images to HR images. Any existed deep learning based image super-resolution model can be integrated into the second stage of the proposed framework for further improvement. In addition it shows great flexibility in balancing between distortion and perceptual quality under unsupervised setting. The proposed method is evaluated on both NTIRE 2017 and 2018 challenge datasets and achieves favorable performance against supervised methods.) <|cite_end|> <|cite_start|> (Reference: Closed-loop Matters: Dual Regression Networks for Single Image Super-Resolution: Deep neural networks have exhibited promising performance in image super-resolution (SR) by learning a nonlinear mapping function from low-resolution (LR) images to high-resolution (HR) images. However, there are two underlying limitations to existing SR methods. First, learning the mapping function from LR to HR images is typically an ill-posed problem, because there exist infinite HR images that can be downsampled to the same LR image. As a result, the space of the possible functions can be extremely large, which makes it hard to find a good solution. Second, the paired LR-HR data may be unavailable in real-world applications and the underlying degradation method is often unknown. For such a more general case, existing SR models often incur the adaptation problem and yield poor performance. To address the above issues, we propose a dual regression scheme by introducing an additional constraint on LR data to reduce the space of the possible functions. Specifically, besides the mapping from LR to HR images, we learn an additional dual regression mapping estimates the down-sampling kernel and reconstruct LR images, which forms a closed-loop to provide additional supervision. More critically, since the dual regression process does not depend on HR images, we can directly learn from LR images. In this sense, we can easily adapt SR models to real-world data, e.g., raw video frames from YouTube. Extensive experiments with paired training data and unpaired real-world data demonstrate our superiority over existing methods.) <|cite_end|> | [
"<|reference_start|> Global-local fusion network for face super-resolution: <|reference_end|>",
"<|reference_start|> Towards Real-World Blind Face Restoration with Generative Facial Prior: Blind face restoration usually relies on facial priors, such as facial geometry prior or reference prior, to restore realistic and faithful details. However, very low-quality inputs cannot offer accurate geometric prior while high-quality references are inaccessible, limiting the applicability in real-world scenarios. In this work, we propose GFP-GAN that leverages rich and diverse priors encapsulated in a pretrained face GAN for blind face restoration. This Generative Facial Prior (GFP) is incorporated into the face restoration process via novel channel-split spatial feature transform layers, which allow our method to achieve a good balance of realness and fidelity. Thanks to the powerful generative facial prior and delicate designs, our GFP-GAN could jointly restore facial details and enhance colors with just a single forward pass, while GAN inversion methods require expensive image-specific optimization at inference. Extensive experiments show that our method achieves superior performance to prior art on both synthetic and real-world datasets. <|reference_end|>",
"<|reference_start|> GAN Prior Embedded Network for Blind Face Restoration in the Wild: Blind face restoration (BFR) from severely degraded face images in the wild is a very challenging problem. Due to the high illness of the problem and the complex unknown degradation, directly training a deep neural network (DNN) usually cannot lead to acceptable results. Existing generative adversarial network (GAN) based methods can produce better results but tend to generate over-smoothed restorations. In this work, we propose a new method by first learning a GAN for high-quality face image generation and embedding it into a U-shaped DNN as a prior decoder, then fine-tuning the GAN prior embedded DNN with a set of synthesized low-quality face images. The GAN blocks are designed to ensure that the latent code and noise input to the GAN can be respectively generated from the deep and shallow features of the DNN, controlling the global face structure, local face details and background of the reconstructed image. The proposed GAN prior embedded network (GPEN) is easy-to-implement, and it can generate visually photo-realistic results. Our experiments demonstrated that the proposed GPEN achieves significantly superior results to state-of-the-art BFR methods both quantitatively and qualitatively, especially for the restoration of severely degraded face images in the wild. The source code and models can be found at https://github.com/yangxy/GPEN. <|reference_end|>",
"<|reference_start|> Unpaired Image Super-Resolution using Pseudo-Supervision: In most studies on learning-based image super-resolution (SR), the paired training dataset is created by downscaling high-resolution (HR) images with a predetermined operation (e.g., bicubic). However, these methods fail to super-resolve real-world low-resolution (LR) images, for which the degradation process is much more complicated and unknown. In this paper, we propose an unpaired SR method using a generative adversarial network that does not require a paired/aligned training dataset. Our network consists of an unpaired kernel/noise correction network and a pseudo-paired SR network. The correction network removes noise and adjusts the kernel of the inputted LR image; then, the corrected clean LR image is upscaled by the SR network. In the training phase, the correction network also produces a pseudo-clean LR image from the inputted HR image, and then a mapping from the pseudo-clean LR image to the inputted HR image is learned by the SR network in a paired manner. Because our SR network is independent of the correction network, well-studied existing network architectures and pixel-wise loss functions can be integrated with the proposed framework. Experiments on diverse datasets show that the proposed method is superior to existing solutions to the unpaired SR problem. <|reference_end|>"
] | [
2,
23,
24,
35
] | {"<|multi_cite_1_1|>": "ss-2001034", "<|multi_cite_1_2|>": "ss-690310", "<|multi_cite_1_3|>": "ss-1275975", "<|multi_cite_2_1|>": "ss-2468621", "<|multi_cite_2_2|>": "ss-2052460", "<|multi_cite_2_3|>": "ss-1050422", "<|multi_cite_3_1|>": "ss-1254173", "<|multi_cite_3_2|>": "ss-1216410", "<|multi_cite_3_3|>": "ss-711186", "<|multi_cite_4_1|>": "ss-2052461", "<|multi_cite_4_2|>": "ss-1643660", "<|multi_cite_4_3|>": "ss-2468615", "<|multi_cite_4_4|>": "ss-2052462", "<|cite_5|>": "arxiv-340670", "<|cite_6|>": "arxiv-167705", "<|cite_7|>": "arxiv-167705", "<|multi_cite_8_1|>": "ss-735943", "<|multi_cite_8_2|>": "ss-2052463", "<|multi_cite_8_3|>": "arxiv-307234", "<|multi_cite_8_4|>": "ss-1302664", "<|multi_cite_8_5|>": "ss-718994", "<|multi_cite_9_1|>": "arxiv-154859", "<|multi_cite_9_2|>": "arxiv-210136", "<|multi_cite_9_3|>": "arxiv-282182", "<|cite_10|>": "ss-805363", "<|multi_cite_11_1|>": "ss-1353350", "<|multi_cite_11_2|>": "arxiv-154859", "<|multi_cite_11_3|>": "arxiv-290638", "<|multi_cite_11_4|>": "arxiv-252642", "<|multi_cite_11_5|>": "arxiv-314558", "<|multi_cite_11_6|>": "arxiv-340670", "<|multi_cite_12_1|>": "arxiv-167705", "<|multi_cite_12_2|>": "arxiv-250523", "<|multi_cite_12_3|>": "arxiv-253860", "<|multi_cite_12_4|>": "arxiv-257395", "<|cite_13|>": "arxiv-120450", "<|cite_14|>": "arxiv-167705", "<|cite_15|>": "arxiv-120450", "<|cite_16|>": "arxiv-167705", "<|cite_17|>": "arxiv-120450", "<|cite_18|>": "arxiv-120450", "<|multi_cite_19_1|>": "arxiv-120450", "<|multi_cite_19_2|>": "arxiv-167705", "<|cite_20|>": "arxiv-120450", "<|cite_21|>": "arxiv-167705", "<|multi_cite_22_1|>": "arxiv-314455", "<|multi_cite_22_2|>": "ss-1839639", "<|multi_cite_23_1|>": "ss-1254172", "<|multi_cite_23_2|>": "ss-952973", "<|multi_cite_23_3|>": "ss-1050422", "<|multi_cite_23_4|>": "ss-2468621", "<|multi_cite_23_5|>": "ss-2052460", "<|multi_cite_23_6|>": "ss-1338000", "<|cite_24|>": "ss-1254172", "<|cite_25|>": "ss-952973", "<|multi_cite_26_1|>": "ss-1050422", "<|multi_cite_26_2|>": "ss-2468621", "<|multi_cite_26_3|>": "ss-2052460", "<|cite_27|>": "ss-1050422", "<|multi_cite_28_1|>": "ss-1338000", "<|multi_cite_28_2|>": "ss-2052461", "<|multi_cite_28_3|>": "ss-1643660", "<|multi_cite_28_4|>": "ss-2468615", "<|multi_cite_28_5|>": "ss-2052462", "<|multi_cite_29_1|>": "ss-735943", "<|multi_cite_29_2|>": "ss-2052463", "<|multi_cite_29_3|>": "arxiv-307234", "<|multi_cite_29_4|>": "ss-1302664", "<|multi_cite_29_5|>": "ss-718994", "<|cite_30|>": "ss-1302664", "<|cite_31|>": "arxiv-307234", "<|cite_32|>": "ss-718994", "<|cite_33|>": "ss-1980639", "<|cite_34|>": "ss-748013", "<|cite_35|>": "arxiv-248496", "<|cite_36|>": "arxiv-252642", "<|cite_37|>": "arxiv-120450", "<|cite_38|>": "arxiv-88870", "<|cite_39|>": "ss-1099547", "<|cite_40|>": "arxiv-252642", "<|cite_41|>": "arxiv-120450", "<|cite_42|>": "ss-805363", "<|multi_cite_43_1|>": "arxiv-167705", "<|multi_cite_43_2|>": "arxiv-250523", "<|multi_cite_43_3|>": "arxiv-227442", "<|multi_cite_43_4|>": "arxiv-253860", "<|multi_cite_43_5|>": "arxiv-257395", "<|cite_44|>": "ss-1260525", "<|cite_45|>": "arxiv-142550", "<|cite_46|>": "arxiv-167705", "<|cite_47|>": "arxiv-120450", "<|cite_48|>": "arxiv-252642", "<|cite_49|>": "arxiv-307046", "<|cite_50|>": "arxiv-314558", "<|cite_51|>": "arxiv-340670", "<|cite_52|>": "arxiv-184253", "<|cite_53|>": "ss-805363", "<|multi_cite_54_1|>": "arxiv-99905", "<|multi_cite_54_2|>": "arxiv-68418", "<|multi_cite_54_3|>": "arxiv-110679", "<|multi_cite_54_4|>": "arxiv-141840", "<|multi_cite_54_5|>": "arxiv-120450", 
"<|multi_cite_54_6|>": "arxiv-121199", "<|multi_cite_54_7|>": "arxiv-119159", "<|multi_cite_54_8|>": "arxiv-167705", "<|multi_cite_54_9|>": "arxiv-250523", "<|multi_cite_54_10|>": "arxiv-227442", "<|multi_cite_54_11|>": "arxiv-253860", "<|multi_cite_54_12|>": "arxiv-257395", "<|cite_55|>": "arxiv-99905", "<|cite_56|>": "arxiv-68418", "<|multi_cite_57_1|>": "arxiv-110679", "<|multi_cite_57_2|>": "arxiv-141840", "<|cite_58|>": "arxiv-120450", "<|cite_59|>": "arxiv-121199", "<|cite_60|>": "arxiv-119159", "<|multi_cite_61_1|>": "arxiv-167705", "<|multi_cite_61_2|>": "arxiv-250523", "<|multi_cite_61_3|>": "arxiv-227442", "<|multi_cite_61_4|>": "arxiv-253860", "<|multi_cite_61_5|>": "arxiv-257395", "<|multi_cite_61_6|>": "arxiv-397678", "<|cite_62|>": "arxiv-167705", "<|cite_63|>": "arxiv-120450", "<|cite_64|>": "arxiv-120450", "<|cite_65|>": "arxiv-167705", "<|cite_66|>": "arxiv-120450", "<|multi_cite_67_1|>": "arxiv-167705", "<|multi_cite_67_2|>": "arxiv-250523", "<|multi_cite_67_3|>": "arxiv-227442", "<|multi_cite_67_4|>": "arxiv-253860", "<|multi_cite_67_5|>": "arxiv-257395", "<|multi_cite_68_1|>": "arxiv-250523", "<|multi_cite_68_2|>": "arxiv-227442", "<|cite_69|>": "arxiv-171114", "<|cite_70|>": "ss-940318", "<|cite_71|>": "arxiv-224835", "<|multi_cite_72_1|>": "arxiv-242433", "<|multi_cite_72_2|>": "arxiv-250523", "<|multi_cite_72_3|>": "arxiv-227442", "<|cite_73|>": "ss-1273277", "<|cite_74|>": "arxiv-120450", "<|cite_75|>": "arxiv-253860", "<|cite_76|>": "arxiv-257395", "<|cite_77|>": "arxiv-397678", "<|cite_78|>": "arxiv-88377", "<|cite_79|>": "arxiv-74282", "<|cite_80|>": "ss-1558035", "<|cite_81|>": "arxiv-167705"} |
2406.11086 | <|paper_start|> Title: A Bayesian Drift-Diffusion Model of Schachter-Singer's Two Factor Theory of Emotion
Abstract: A Bayesian Drift-Diffusion Model of Schachter-Singer's Two Factor Theory of Emotion: Bayesian inference has been used in the past to model visual perception (Kersten, Mamassian, & Yuille, 2004), accounting for the Helmholtz principle of perception as "unconscious inference" that is constrained by bottom-up sensory evidence (likelihood) while subject to top-down expectation, priming, or other contextual influences (prior bias); here "unconsciousness" merely relates to the "directness" of perception in the sense of Gibson. Here, we adopt the same Bayesian framework to model emotion process in accordance with Schachter-Singer's Two-Factor theory, which argues that emotion is the outcome of cognitive labeling or attribution of a diffuse pattern of autonomic arousal (Schachter & Singer, 1962). In analogous to visual perception, we conceptualize the emotion process as an instance of Bayesian inference, combining the contextual information with a person's physiological arousal patterns. Drift-diffusion models were constructed to simulate emotional processes, where the decision boundaries correspond to the emotional state experienced by the participants, and boundary-crossing constitutes "labeling" in Schachter-Singer's sense. Our model is tested against experimental data from the Schachter & Singer's study (1962) and the Ross et al. study (1969). Two model scenarios are investigated, in which arousal pattern as one factor is pitted against contextual interaction with an confederate (in Schachter-Singer case) or explicitly instructed mis-attribution (in Ross et al. case) as another factor, mapping onto the Bayesian prior (initial position of the drift) and the likelihood function (evidence accumulation or drift rate). We find that the first scenario (arousal as the prior and context as the likelihood) has a better fit with Schachter & Singer (1962) whereas the second scenario (context as the prior and arousal as the likelihood) has a better fit with Ross et al. (1969).
Introduction
In his "The Principles of Psychology", William James (1922) famously asked "what is an emotion", and focused on physiological arousal as a basis of emotional experience. This arousal theory of emotion subsequently inspired a huge volume of work on cognitive involvement, such as misattribution, appraisal, etc. <|cite_start|> (Reference: Core affect and the psychological construction of emotion.: At the heart of emotion, mood, and any other emotionally charged event are states experienced as simply feeling good or bad, energized or enervated. These states--called core affect--influence reflexes, perception, cognition, and behavior and are influenced by many causes internal and external, but people have no direct access to these causal connections. Core affect can therefore be experienced as free-floating (mood) or can be attributed to some cause (and thereby begin an emotional episode). These basic processes spawn a broad framework that includes perception of the core-affect-altering properties of stimuli, motives, empathy, emotional meta-experience, and affect versus emotion regulation; it accounts for prototypical emotional episodes, such as fear and anger, as core affect attributed to something plus various nonemotional processes.) <|cite_end|>. On the computation modeling side, the Bayesian approach to cognition has gained significant attention in recent years due to its capacity to solve induction and causal inference problems within a probabilistic framework <|cite_start|> (Reference: Two proposals for causal grammars: In the previous chapter (Tenenbaum, Griffiths, & Niyogi, this volume), we introduced a framework for thinking about the structure, function, and acquisition of intuitive theories inspired by an analogy to the research program of generative grammar in linguistics. We argued that a principal function for intuitive theories, just as for grammars for natural languages, is to generate a constrained space of hypotheses that people consider in carrying out a class of cognitively central and otherwise severely underconstrained inductive inference tasks. Linguistic grammars generate a hypothesis space of syntactic structures considered in sentence comprehension; intuitive theories generate a hypothesis space of causal network structures considered in causal induction. Both linguistic grammars and intuitive causal theories must also be reliably learnable from primary data available to people. In our view, these functional characteristics of intuitive theories should strongly constrain the content and form of the knowledge they represent, leading to representations somewhat like those used in generative grammars for language. However, until now we have not presented any specific proposals for formalizing the knowledge content or representational form of “causal grammars.” That is our goal here. Just as linguistic grammars encode the principles that implicitly underlie all grammatical utterances in a language, so do causal grammars express knowledge more abstract than any one causal network in a domain. Consequently, existing approaches for representing causal knowledge based on Bayesian networks defined over observable events, properties or variables, are not sufficient to characterize causal grammars. 
Causal grammars are in some sense analogous to the “framework theories” for core domains that have been studied in cognitive development (Wellman & Gelman, 1992): the domain-specific concepts and principles that allow learners to construct appropriate causal networks for reasoning about) <|cite_end|> <|cite_start|> (Reference: Bayesian models of cognition: There has been a recent explosion in research applying Bayesian models to cognitive phenomena. This development has resulted from the realization that across a wide variety of tasks the fundamental problem the cognitive system confronts is coping with uncertainty. From visual scene recognition to on-line language comprehension, from categorizing stimuli to determining to what degree an argument is convincing, people must deal with the incompleteness of the information they possess to perform these tasks, many of which have important survival-related consequences. This paper provides a review of Bayesian models of cognition, dividing them up by the different aspects of cognition to which they have been applied. The paper begins with a brief review of Bayesian inference. This falls short of a full technical introduction but the reader is referred to the relevant literature for further details. There follows reviews of Bayesian models in Perception, Categorization, Learning and Causality, Language Processing, Inductive Reasoning, Deductive Reasoning, and Argumentation. In all these areas, it is argued that sophisticated Bayesian models are enhancing our understanding of the underlying cognitive computations involved. It is concluded that a major challenge is to extend the evidential basis for these models, especially to accounts of higher level cognition. WIREs Cogn Sci 2010 1 811-823 For further resources related to this article, please visit the WIREs website.) <|cite_end|>, extending previous applications of Bayesianism to object perception <|cite_start|> (Reference: Object perception as bayesian inference: We perceive the shapes and material properties of objects quickly and reliably despite the complexity and objective ambiguities of natural images. Typical images are highly complex because they consist of many objects embedded in background clutter. Moreover, the image features of an object are extremely variable and ambiguous owing to the effects of projection, occlusion, background clutter, and illumination. The very success of everyday vision implies neural mechanisms, yet to be understood, that discount irrelevant information and organize ambiguous or noisy local image features into objects and surfaces. Recent work in Bayesian theories of visual perception has shown how complexity may be managed and ambiguity resolved through the task-dependent, probabilistic integration of prior object knowledge with image features.) <|cite_end|> and language acquisition <|cite_start|> (Reference: Special Issue : Probabilistic models of cognition Probabilistic models of language processing and acquisition: Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of howhumans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. 
Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical developments and online corpus creation has enabled large models to be tested, revealing probabilistic constraints in processing, undermining acquisition arguments based on a perceived poverty of the stimulus, and suggesting fruitful links with probabilistic theories of categorization and ambiguity resolution in perception.) <|cite_end|>. A growing interest to apply Bayesian modeling to human emotions emerges, especially within the context of Theory of Mind <|cite_start|> (Reference: Formalizing emotion concepts within a Bayesian model of theory of mind.: ) <|cite_end|> <|cite_start|> (Reference: Computational models of emotion inference in Theory of Mind: A review and roadmap: Abstract Research on social cognition has fruitfully applied computational modeling approaches to explain how observers understand and reason about others’ mental states. By contrast, there has been less work on modeling observers’ understanding of emotional states. We propose an intuitive theory framework to studying affective cognition—how humans reason about emotions—and derive a taxonomy of inferences within affective cognition. Using this taxonomy, we review formal computational modeling work on such inferences, including causal reasoning about how others react to events, reasoning about unseen causes of emotions, reasoning with multiple cues, as well as reasoning from emotions to other mental states. In addition, we provide a roadmap for future research by charting out inferences—such as hypothetical and counterfactual reasoning about emotions—that are ripe for future computational modeling work. This framework proposes unifying these various types of reasoning as Bayesian inference within a common “intuitive Theory of Emotion.” Finally, we end with a discussion of important theoretical and methodological challenges that lie ahead in modeling affective cognition.) <|cite_end|>, though most existing works focus on third-person appraisals based on contextual cues instead of on identifying one's own emotion process.
In this paper, after a review of existing emotion theories, we propose a Bayesian formulation that computationally implements the Schachter \& Singer's Two-Factor theory of emotion. We focus on the Two-Factor theory as a conceptually viable framework for emotion computation for two reasons. First, the Two-Factor theory bridges the physiological and cognitive aspects of the emotion process <|cite_start|> (Reference: The Schachter theory of emotion: Two decades later.: Schachter's cognition-arousal theory of emotion is critically examined from both a conceptual and an empirical point of view. Several of the theory's less clearly defined aspects are clarified, and empirical evidence pertaining to three major deductions from the theory is reviewed. It is concluded that only one of these deductions, claiming that misattributed arousal from an extraneous source intensifies emotional reactions, can be considered adequately supported by the data. Little support is found for the second hypothesis, that arousal reduction leads to a reduction in the intensity of emotional state, and the status of the third hypothesis, that misattribution of emotionally induced arousal to a neutral source results in a reduction of emotionality, is considered equivocal because of plausible alternative interpretations of the pertinent findings. Furthermore, it is concluded that there is no convincing evidence for Schachter's claim that arousal is a necessary condition for an emotional state, nor for the suggestion that emotional states may result from a labeling of unexplained arousal. It is suggested that the role of arousal in emotion has been overstated and that the available data support at best a rather attenuated version of Schachter's theory—that is, that arousal feedback can have an intensifying effect on emotional states—and that this arousal-emotion relationship is mediated, in part, by causal attributions regarding the source of arousal.) <|cite_end|>, with a variety of modern appraisal theories focusing on cognitive aspects. Second, we can demonstrate a clear mapping from the two factors in the Two-Factor theory, namely physiological arousal and cognitive label, to the two key components of the Bayesian inference scheme, the prior and the likelihood. This allows us to model first-person emotion recognition as a Bayesian inference process.
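To make this mapping concrete, the two modeling scenarios examined below can be written schematically in Bayes-rule form. This is only a notational sketch (the symbols $E$, $a$, and $c$ for the emotion label, arousal pattern, and context are introduced here purely for illustration, and each scenario assumes the likelihood factor is conditionally independent of the prior factor given $E$); the full dynamical implementation is given by the drift-diffusion model described next.
\begin{align}
\text{Scenario 1:}\quad P(E \mid a, c) &\propto \underbrace{P(c \mid E)}_{\text{likelihood: context}}\;\underbrace{P(E \mid a)}_{\text{prior: arousal}} \\
\text{Scenario 2:}\quad P(E \mid a, c) &\propto \underbrace{P(a \mid E)}_{\text{likelihood: arousal}}\;\underbrace{P(E \mid c)}_{\text{prior: context}}
\end{align}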
We then develop a drift-diffusion model to implement the dynamical Bayesian inference and to account for the experimental findings of Schachter \& Singer’s study. There, participants who were physiologically aroused (via drug injection, without being informed of the arousal) later reported different emotions (i.e., labeled their arousal pattern differently) depending on the nature of their interaction with an experimental confederate they encountered post-injection. In our drift-diffusion modeling, the decision boundaries correspond to the euphoria and anger states experienced by the participants in the experiment, and boundary-crossing constitutes “labeling” in Schachter-Singer’s sense. Response time (RT) in the drift-diffusion model is used as a surrogate measure of the self-rated intensity of the emotional state, with higher intensity corresponding to a shorter response time. We propose two model scenarios (versions). In the first scenario, the arousal pattern is used as the prior, while the likelihood function driving evidence accumulation models the interaction with the confederate (context); we adopt an unbiased prior and allow the drift rate (and its sign) to capture the nature of the interaction with the confederate. In the second scenario, we use the context as the prior and the physiological arousal pattern as the likelihood function. The drift-diffusion model is then applied to account for the data of Ross et al. (1969), in which the time-course of the misattribution effect was empirically measured. Because the Ross et al. paradigm is one of decision-making under time pressure, we compare it to computational models of decision making with collapsing boundaries. Simulation results are presented for both the Schachter \& Singer (1962) study on context-modulated emotional state labeling and the Ross et al. (1969) study on fear reduction through misattribution.
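As a rough illustration of this simulation logic, the following Python sketch (with hypothetical parameter values; it is not the fitted model from either study) draws sample trials from a two-boundary drift-diffusion process: the starting point plays the role of the prior, the drift rate plays the role of the likelihood (evidence accumulation), the crossed boundary gives the emotion label, and the crossing time serves as the RT surrogate for intensity.
\begin{verbatim}
import numpy as np

def simulate_ddm(drift, start, bound=1.0, noise=1.0, dt=0.001,
                 max_t=5.0, rng=None):
    """Simulate one two-boundary drift-diffusion trial.

    drift : evidence-accumulation rate; its sign encodes which label the
            incoming evidence (e.g., the confederate's behavior) favors.
    start : starting point in (-bound, bound); a nonzero value acts as a
            prior bias toward one of the two labels.
    Returns (label, rt): the upper boundary is read as "euphoria", the lower
    as "anger" (an illustrative mapping), and a shorter rt is taken as a
    proxy for higher self-rated intensity.
    """
    if rng is None:
        rng = np.random.default_rng()
    x, t = start, 0.0
    while t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= bound:
            return "euphoria", t
        if x <= -bound:
            return "anger", t
    return "no label", max_t  # no boundary crossed within the time limit

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Illustrative Scenario-1 setup: unbiased prior (start = 0.0); a euphoric
    # confederate corresponds to a positive drift, an angry one to a negative drift.
    trials = [simulate_ddm(drift=0.8, start=0.0, rng=rng) for _ in range(1000)]
    labels = [label for label, _ in trials]
    rts = [rt for _, rt in trials]
    print("P(euphoria) =", labels.count("euphoria") / len(labels))
    print("mean RT =", float(np.mean(rts)))
\end{verbatim}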
\subsection{The Bayesian Approach to Perception}
\subsubsection{Bayes Formula}
Bayesian models have been increasingly used in the study of cognition for decision-making and prediction tasks. Bayesian inference is based on the simple Bayes rule
\begin{equation}
P(h|e)=\frac{P(e|h)P(h)}{P(e)} = \frac{P(e|h)P(h)}{\sum_h P(e|h)P(h)}
\end{equation}
where $e$ represents an observed event (the evidence) and $h$ a hypothesis. Here $P(h)$ is called the prior, $P(e|h)$ is the likelihood function, and the resulting $P(h|e)$ is the posterior. In the Bayesian framework, uncertainty in reasoning about variables is represented by a probability distribution, which is updated upon receiving evidence according to probability theory <|cite_start|> (Reference: How cognitive modeling can benefit from hierarchical Bayesian models.: ) <|cite_end|>. The prior (and posterior) refers to the degree of belief in a hypothesis before (and after) observation, while the likelihood function represents the evidence accumulation process. The Bayesian approach has been used extensively in cognitive modeling as it allows agents to update knowledge about the world in a rational way (i.e., with self-consistency) as new data become available.
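For concreteness, a minimal numerical example of this update over two candidate emotion labels is sketched below; the numbers are made up purely to illustrate the formula and do not correspond to any study.
\begin{verbatim}
# Discrete Bayes update over two hypotheses (illustrative numbers only).
priors = {"euphoria": 0.5, "anger": 0.5}       # P(h): unbiased prior
likelihoods = {"euphoria": 0.7, "anger": 0.2}  # P(e|h) for one observed contextual cue e

unnormalized = {h: likelihoods[h] * priors[h] for h in priors}
evidence = sum(unnormalized.values())          # P(e) = sum_h P(e|h) P(h)
posterior = {h: unnormalized[h] / evidence for h in priors}

print(posterior)  # {'euphoria': 0.777..., 'anger': 0.222...}
\end{verbatim}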
\subsubsection{Bayesian Inference and Perception}
It was Helmholtz who famously drew a distinction between sensation and perception, arguing that perception is an "unconscious inference" process based on sensory stimuli -- when sensory stimuli are registered, the mind draws inferences about the underlying reality, with experience, context, and expectation playing an important role <|cite_start|> (Reference: Was Helmholtz a Bayesian?: Modern developments in machine vision and object recognition have generated renewed interest in the proposal for drawing inferences put forward by the Rev. Thomas Bayes (1701–1759). In this connection the epistemological studies by Hermann Helmholtz (1821–1894) are often cited as laying the foundation of the currently popular move to regard perception as Bayesian inference. Helmholtz in his mature writings tried to reconcile the German idealist notions of reality-as-hypothesis with scientists' quests for the laws of nature, and espoused the view that we “attain knowledge of the lawful order in the realm of the real, but only in so far as it is represented in the tokens within the system of sensory impressions”. His propositions of inferring objects from internal sensory signals by what he called ‘unconscious inferences’ have made Helmholtz be regarded as a proto-Bayesian. But juxtaposing Bayes's original writings, the modern formulation of Bayesian inference, and Helmholtz's views of perception reveals only a tenuous relationship.) <|cite_end|>.
Although Helmholtz's arguments are mostly made in a philosophical context, Bayesian inference was advanced as a computational framework in the study of how the brain can extract 3-D geometric information (e.g., shape) of objects, in a probabilistic way yet with an extraordinary level of accuracy, from 2-D visual inputs on the retina <|cite_start|> (Reference: Object perception as bayesian inference: We perceive the shapes and material properties of objects quickly and reliably despite the complexity and objective ambiguities of natural images. Typical images are highly complex because they consist of many objects embedded in background clutter. Moreover, the image features of an object are extremely variable and ambiguous owing to the effects of projection, occlusion, background clutter, and illumination. The very success of everyday vision implies neural mechanisms, yet to be understood, that discount irrelevant information and organize ambiguous or noisy local image features into objects and surfaces. Recent work in Bayesian theories of visual perception has shown how complexity may be managed and ambiguity resolved through the task-dependent, probabilistic integration of prior object knowledge with image features.) <|cite_end|>. Based on these early successes in Bayesian modeling, emotion theorists have also attempted to apply similar frameworks on emotion perception (e.g. <|cite_start|> (Reference: Computational models of emotion inference in Theory of Mind: A review and roadmap: Abstract Research on social cognition has fruitfully applied computational modeling approaches to explain how observers understand and reason about others’ mental states. By contrast, there has been less work on modeling observers’ understanding of emotional states. We propose an intuitive theory framework to studying affective cognition—how humans reason about emotions—and derive a taxonomy of inferences within affective cognition. Using this taxonomy, we review formal computational modeling work on such inferences, including causal reasoning about how others react to events, reasoning about unseen causes of emotions, reasoning with multiple cues, as well as reasoning from emotions to other mental states. In addition, we provide a roadmap for future research by charting out inferences—such as hypothetical and counterfactual reasoning about emotions—that are ripe for future computational modeling work. This framework proposes unifying these various types of reasoning as Bayesian inference within a common “intuitive Theory of Emotion.” Finally, we end with a discussion of important theoretical and methodological challenges that lie ahead in modeling affective cognition.) <|cite_end|>).
\subsection{Theories and Paradigms of Emotion}
\subsubsection{Earlier theories}
Since the dawn of psychological science, scholars have proposed various models of emotional processes. For example, the James-Lange theory views emotion as the result of physiological arousal <|cite_start|> (Reference: The emotions.: In De Anima A.1, Aristotle developed an account of certain ‘affections of the soul’ such as anger which is his model for other ‘affections and actions common to body and soul’ such as desire and sense perception. His remarks about anger can be understood in two different ways. According to one account, which I call ‘the Pure Form Interpretation’, anger is essentially a compound made up of two definitionally distinct features, one purely psychological (a desire for revenge: its form) and the other physical (the boiling of the blood: its matter), where the latter in some way ‘underlies’ the former. In the other, described as ‘the Impure Form Interpretation’, the type of desire for revenge referred to in the definition of anger (its form) is inseparable in definition from (and not abstractable from) physical features such as, for example, the boiling blood. The type of desire which defines anger is itself defined as a boiling-of-the-blood-(or hot-) desire for revenge. Aristotle’s comments in De Anima A.1 are, it is argued, best understood in line with the Impure Form Interpretation, as defining anger as an inextricably psycho-physical type of desire for revenge, not decomposable into two definitionally separate features, one purely psychological, one purely physical.) <|cite_end|>, or as James puts it, "the feeling of bodily changes as they occur is the emotion". Therefore, emotion is felt when changes in bodily state are perceived. However, people who have lost sensations can still feel emotion and people who feel increased heart rate through exercise may not feel any particular emotion. Thus, physical sensation and emotion are clearly two different processes <|cite_start|> (Reference: Dealing with Feeling: Emotion, Affect, and the Qualitative Research Encounter: Emotion and affect are different, yet intricately interwoven. Emotions such as fear, joy, or sadness are biological in as far as they are physically felt, but they are relational in as far as they are more fully experienced. Affect arises out of the relational quality of emotion—it consists of the myriad ways in which emotions are embodied, expressed, and enacted.
Emotion and affect are influenced by their physical and symbolic contexts. In terms of physical context, data for this article were collected from two different research studies and several sites in the Free State Province of South Africa. Two forms of data were collected: verbal data and images/artworks. In terms of symbolic context, these verbal and visual forms of language and their functioning were explored to generate insights on the social construction of emotion and affect.
Margaret Wetherell’s work provides a theoretical basis for analyzing emotion and affect. Rather than conceptualizing emotion in terms of obscure or esoteric formulations, her “practice-based” approach grounds the study of emotion by examining its manifestation in actions. When taken together, action and practice imply pattern and order, form and function, process and consequence.
Both projects featured in this paper are sensitive studies that stir emotion. This is fertile ground for exploring emotion and affect in participants’ narratives. It is also fertile ground for exploring how emotion and affect may influence the qualitative researcher and the research process itself. Accordingly, this paper offers an additional layer of analysis on the functioning of intersubjectivity, power, emotion, and affect in the research encounter. Concluding insights endorse the practice of mindfulness as a fruitful approach to manage researcher subjectivity in the qualitative research encounter.) <|cite_end|>. This essentially is the Cannon–Bard theory, arguing that physiological reactions and emotional experiences occur simultaneously, but are two independent processes <|cite_start|> (Reference: The James-Lange theory of emotions: a critical examination and an alternative theory. By Walter B. Cannon, 1927.: ) <|cite_end|>. Although the Cannon-Bard theory addressed some of the shortcomings of the James-Lange theory, it is challenged by studies showing that physical reactions can influence emotions <|cite_start|> (Reference: A meta-analysis of the facial feedback literature: Effects of facial feedback on emotional experience are small and variable.: The facial feedback hypothesis suggests that an individual's experience of emotion is influenced by feedback from their facial movements. To evaluate the cumulative evidence for this hypothesis, we conducted a meta-analysis on 286 effect sizes derived from 138 studies that manipulated facial feedback and collected emotion self-reports. Using random effects meta-regression with robust variance estimates, we found that the overall effect of facial feedback was significant but small. Results also indicated that feedback effects are stronger in some circumstances than others. We examined 12 potential moderators, and 3 were associated with differences in effect sizes: (a) Type of emotional outcome: Facial feedback influenced emotional experience (e.g., reported amusement) and, to a greater degree, affective judgments of a stimulus (e.g., the objective funniness of a cartoon). Three publication bias detection methods did not reveal evidence of publication bias in studies examining the effects of facial feedback on emotional experience, but all 3 methods revealed evidence of publication bias in studies examining affective judgments. (b) Presence of emotional stimuli: Facial feedback effects on emotional experience were larger in the absence of emotionally evocative stimuli (e.g., cartoons). (c) Type of stimuli: When participants were presented with emotionally evocative stimuli, facial feedback effects were larger in the presence of some types of stimuli (e.g., emotional sentences) than others (e.g., pictures). The available evidence supports the facial feedback hypothesis' central claim that facial feedback influences emotional experience, although these effects tend to be small and heterogeneous. (PsycINFO Database Record (c) 2019 APA, all rights reserved).) <|cite_end|>.
\subsubsection{Two-Factor theory and its critique}
An influential synthesis of these earlier conceptualizations is Schachter \& Singer's Two-Factor theory of emotion, which posits that emotion is based on two factors: physiological arousal and a cognitive label <|cite_start|> (Reference: Cognitive, social, and physiological determinants of emotional state.: The problem of which cues, internal or external, permit a person to label and identify his own emotional state has been with us since the days that James (1890) first tendered his doctrine that "the bodily changes follow directly the perception of the exciting fact, and that our feeling of the same changes as they occur is the emotion" (p. 449). Since we are aware of a variety of feeling and emotion states, it should follow from James' proposition that the various emotions will be accompanied by a variety of differentiable bodily states. Following James' pronouncement, a formidable number of studies were undertaken in search of the physiological differentiators of the emotions. The results, in these early days, were almost uniformly negative. All of the emotional states experi-) <|cite_end|>. When a subject experiences arousal, the subject appraises the context of the arousal pattern, which leads to an experienced emotional state. In comparison to the James-Lange theory, the Two-Factor theory incorporates a cognitive component in the emotional process. In comparison to the Cannon-Bard theory, the intermediary role of cognition is established, i.e., cognition mediates between physiological reaction and emotional experience; according to Schachter \& Singer (1962), the presence of both autonomic arousal and a cognitive label is necessary for a person to perceive an emotion. The Two-Factor theory is supported by several studies on misattributed arousal and is able to explain the emotion process in many situations. The theory also inspired many modern variants of the cognition-arousal theory of emotion, which share with the Two-Factor theory the central postulate that emotion is a function of cognition and arousal, although they largely disagree on the ways in which cognition and arousal interact to generate emotion <|cite_start|> (Reference: Varieties of cognition-arousal theory: Three main versions of cognition-arousal theory are distinguished depending on how they interpret the theory’s basic postulate, that an emotion is a function of cognition and arousal: objectivist causal theories, attributional theories, and fusion theories. The objectivist causal and attributional theories each comprise a causal-functional and a part-whole version, and the fusion theory subsumes in particular a categorization and a perceptual integration version. In addition, the attributional version of cognition-arousal theory can be reinterpreted as a theory of emotion self-ascription. Although arousal may in fact not be necessary for emotions, a modified cognition-feeling theory that replaces arousal with intrinsically affective feelings, seems still viable. Arguments are presented why the objectivist causal-functional version of this theory should be preferred.) <|cite_end|>.
However, the necessity of arousal for emotion is still a matter of intense debate <|cite_start|> (Reference: The Schachter theory of emotion: Two decades later.: Schachter's cognition-arousal theory of emotion is critically examined from both a conceptual and an empirical point of view. Several of the theory's less clearly defined aspects are clarified, and empirical evidence pertaining to three major deductions from the theory is reviewed. It is concluded that only one of these deductions, claiming that misattributed arousal from an extraneous source intensifies emotional reactions, can be considered adequately supported by the data. Little support is found for the second hypothesis, that arousal reduction leads to a reduction in the intensity of emotional state, and the status of the third hypothesis, that misattribution of emotionally induced arousal to a neutral source results in a reduction of emotionality, is considered equivocal because of plausible alternative interpretations of the pertinent findings. Furthermore, it is concluded that there is no convincing evidence for Schachter's claim that arousal is a necessary condition for an emotional state, nor for the suggestion that emotional states may result from a labeling of unexplained arousal. It is suggested that the role of arousal in emotion has been overstated and that the available data support at best a rather attenuated version of Schachter's theory—that is, that arousal feedback can have an intensifying effect on emotional states—and that this arousal-emotion relationship is mediated, in part, by causal attributions regarding the source of arousal.) <|cite_end|>. A variant of the Two-Factor theory is the Cognitive Appraisal theory <|cite_start|> (Reference: Emotion and personality: ) <|cite_end|> <|cite_start|> (Reference: Emotion and Adaptation: Part I: BACKGROUND: About emotion Issues of research, classification and measurements Part II: THE COGNITIVE-MOTIVATIONAL-RELATIONAL THEORY: The person-environment relationship: motivation and coping Cognition and emotion Issues of causality Part III: INDIVIDUAL EMOTIONS: Goal incongruent (negative) emotions Goal congruent (positive) and problematic emotions Part IV: EMOTIONAL DEVELOPMENT: Individual development Social influence Part V: PRACTICAL APPLICATIONS: Emotions and health Implications for research, assessment, treatment and disease prevention References Index.) <|cite_end|>, which also posits that emotions are felt due to the appraisals of the situations. Contrary to the Two-Factor account, Lazarus (1966) argues that cognitive appraisal {\it precedes} emotion and physiological arousal. Two types of appraisal were differentiated: in the primary appraisal, the subject appraises the relevance of the situation while in secondary appraisal, the subject evaluates the relevant resources for coping. Although supported by contemporary emotion research <|cite_start|> (Reference: Appraisal theory: Old and new questions: I describe my current thinking on two old questions—the causal role of appraisals and the relationship of appraisal theories to basic emotions theories and constructivist theories, and three (sort of) new questions—the completeness of appraisals, the role of language, and the development of automaticity in emotional responses.) 
<|cite_end|>, Cognitive Appraisal theory has been criticised for overemphasising the conscious or voluntary processes, as cognitive appraisal might only be one of several ways to produce emotion <|cite_start|> (Reference: Feeling and thinking: Preferences need no inferences.: : Affect is considered by most contemporary theories to be postcognitive, that is, to occur only after considerable cognitive operations have been accomplished. Yet a number of experimental results on preferences, attitudes, impression formation, and de-_ cision making, as well as some clinical phenomena, suggest that affective judgments may be fairly independent of, and precede in time, the sorts of perceptual and cognitive operations commonly assumed to be the basis of these affective judgments. Affective reactions to stimuli are often the very first reactions of the organism, and for lower organisms they are the dominant reactions. Affective reactions can occur without extensive perceptual and cognitive encoding, are made with greater confidence than cognitive judgments, and can be made sooner. Experimental evidence is presented demonstrating that reliable affective discriminations (like-dislike ratings) can be made in the total absence of recognition memory (old-new judgments). Various differences between judgments based on affect and those based on perceptual and cognitive processes are examined. It is concluded that affect and cognition are under the control of separate and partially independent systems that can influence each other in a variety of ways, and that both constitute independent sources of effects in information processing.) <|cite_end|> <|cite_start|> (Reference: Four systems for emotion activation: Cognitive and noncognitive processes.: The significant role of emotions in evolution and adaptation suggests that there must be more than 1 mechanism for generating them. Nevertheless, much of current emotion theory focuses on cognitive processes (appraisal, attribution, and construal) as the sole, or primary, means of eliciting emotions. As an alternative to this position, the present model describes 4 types of emotion-activating systems, 3 of which involve noncognitive information processing. From an evolutionary-developmental perspective, the systems maybe viewed as a loosely organized hierarchical arrangement, with neural systems, the simplest and most rapid, at the base and cognitive systems, the most complex and versatile, at the top. The emotion-activating systems operate under a number of constraints, including genetically influenced individual differences. The hierarchical organization of the systems for generating emotions provides an adaptive advantage.) <|cite_end|>.
While the Two-Factor theory has sparked enormous research interest, including the Cognitive Appraisal theory, the experimental paradigm itself in the Schachter \& Singer famous study <|cite_start|> (Reference: Cognitive, social, and physiological determinants of emotional state.: The problem of which cues, internal or external, permit a person to label and identify his own emotional state has been with us since the days that James (1890) first tendered his doctrine that "the bodily changes follow directly the perception of the exciting fact, and that our feeling of the same changes as they occur is the emotion" (p. 449). Since we are aware of a variety of feeling and emotion states, it should follow from James' proposition that the various emotions will be accompanied by a variety of differentiable bodily states. Following James' pronouncement, a formidable number of studies were undertaken in search of the physiological differentiators of the emotions. The results, in these early days, were almost uniformly negative. All of the emotional states experi-) <|cite_end|>, despite its historical importance, has been quite controversial. Critics have argued that the use of epinephrine seems to cause different physiological reactions among subjects and may not be a reliable manipulation <|cite_start|> (Reference: A critique of determinants of emotional state by Schachter and Singer (1962): The paper by Schachter and Singer (1962) on “determinants of emotional state” is criticized on the grounds that (a) levels of arousal were not the same for the different conditions compared; (b) the placebo groups were consistently not significantly different from the control groups on the various measures of emotional states; (c) the self-report indices were inadequate as measuring instruments; (d) a double-blind procedure was not used; and (e) there is a marked overgeneralization on the basis of very limited samplings of conditions, emotions, arousal states and types of subjects.) <|cite_end|>. The magnitude of the effects in the Schachter \& Singer (1962) study is small and some are not statistically significant, as subsequent replicating studies yield varying success. In their replication study, Marshall \& Zimbardo <|cite_start|> (Reference: Affective consequences of inadequately explained physiological arousal.: ) <|cite_end|> found that the behavior of the confederate has little influence on the subjects. Another replication study by Maslach <|cite_start|> (Reference: Negative emotional biasing of unexplained arousal.: ) <|cite_end|> used hypnotic suggestions to cause the state of arousal instead of using epinephrine, and found uniformly negative experienced emotion in all groups. Finally, the study of Erdmann \& Janke <|cite_start|> (Reference: Interaction between physiological and cognitive determinants of emotions: Experimental studies on Schachter's theory of emotions: ) <|cite_end|> used oral administration instead of injection of ephedrine to induce arousal, and added
an anxiety condition in addition to the euphoric and anger conditions of the Schachter \& Singer (1962) study. In the anxiety condition, the subjects were told they would receive electric shocks, and were then given mild shocks. The results showed that the emotions reported in the euphoric and anger conditions conformed with the Two-Factor theory, but increased arousal did not affect the reported anxiety among subjects.
\subsubsection{Misattribution of Arousal Paradigm}
Despite the criticism of the Schachter \& Singer (1962) study, it inspired a large body of subsequent research under an alternative "Misattribution of Arousal" paradigm, which provided strong support for the Two-Factor theory <|cite_start|> (Reference: A review of research on Schachter's theory of emotion and the misattribution of arousal: Schachter's two factor theory of emotion and the misattribution of arousal paradigm have been applied to perceptions of euphoria, anger, humour, fear, erotica, discomfort, and love. This paper attempts to review this research and assess both the theory and the misattribution paradigm. The classic Schachter and Singer (1962) study is reviewed, along with criticisms and later attempted replications. Other early research on Schachter's theory is also critqued. The reduction of fear through the misattribution of arousal is examined and its limitations noted. A plausible alternative explanation for many effects of the misattribution paradigm is presented. Research concerning the misattribution of arousal and cognitive dissonance, interpersonal attraction, helping behaviour, and aggression are reviewed and discussed. An overall assessment of Schachter's two factor theory and the misattribution paradigm is also presented. Schachter's (1964a, b) theory is not well supported by the research, but the available evidence has not necessarily disproven the theory either. The misattribution paradigm has proven to be very effective, yet the theoretical basis for this effect is still in doubt. Surprisingly, the most widely cited research is generally of limited value, while little known research has been of much greater significance.) <|cite_end|>. The Misattribution of Arousal paradigm refers to the phenomenon where individuals attribute physiological arousal to an incorrect source. An example is Ross et al. <|cite_start|> (Reference: Toward an attribution therapy: the reduction of fear through induced cognitive-emotional misattribution.: ) <|cite_end|>, which used the misattribution effect to induce fear reduction. The participants were recruited for a learning task to solve a puzzle. If they failed to solve the puzzle, they would receive an electric shock. During the experiment, some background noise was present. When the subjects were told that their fear-related symptoms were due to effects of the background noise, the subjects reported less fear than those who were informed otherwise. In other words, when the experimenter manipulated the subjects' cognitive appraisal, the subjects misattributed their bodily state to fear-neutral sources, causing fear reduction.
\subsection{Bayesian Inference on Emotion Process}
Previous work has applied the Bayesian framework to emotion appraisal, i.e., to the mental models of an agent's emotional states. Baker et al. <|cite_start|> (Reference: Rational quantitative attribution of beliefs, desires and percepts in human mentalizing: ) <|cite_end|> propose a Bayesian Theory of Mind (BToM) model of how humans infer other agents' mental states, such as desires and beliefs, which can be applied to formalize emotion concepts <|cite_start|> (Reference: Formalizing emotion concepts within a Bayesian model of theory of mind.: ) <|cite_end|>. As emotions can cause agents' to display certain behaviors and expressions, an observer can infer others' underlying emotions from observations. The model of <|cite_start|> (Reference: Computational models of emotion inference in Theory of Mind: A review and roadmap: Abstract Research on social cognition has fruitfully applied computational modeling approaches to explain how observers understand and reason about others’ mental states. By contrast, there has been less work on modeling observers’ understanding of emotional states. We propose an intuitive theory framework to studying affective cognition—how humans reason about emotions—and derive a taxonomy of inferences within affective cognition. Using this taxonomy, we review formal computational modeling work on such inferences, including causal reasoning about how others react to events, reasoning about unseen causes of emotions, reasoning with multiple cues, as well as reasoning from emotions to other mental states. In addition, we provide a roadmap for future research by charting out inferences—such as hypothetical and counterfactual reasoning about emotions—that are ripe for future computational modeling work. This framework proposes unifying these various types of reasoning as Bayesian inference within a common “intuitive Theory of Emotion.” Finally, we end with a discussion of important theoretical and methodological challenges that lie ahead in modeling affective cognition.) <|cite_end|> deals with a variety of tasks such as attributing emotional reactions and reasoning about others’ emotion from multiple emotional cues, with rich causal linkage between emotion and events.
These various Bayesian frameworks of emotion inference are centered on appraisals of other agents' emotions rather than on a direct, subjective perception of one's own emotional state. Our proposed model is hence of a very different kind -- we attempt to draw a parallel between visual perception and emotion perception by building on the Two-Factor theory and modeling emotion as a Bayesian inference through which the subject appraises their {\it own} physiological arousal and contexts to produce an emotion label.
\subsubsection{Drift-Diffusion Model}
DDM is popularly used for dynamic information accumulation during perception and decision-making <|cite_start|> (Reference: Perceptual decision making: drift-diffusion model is equivalent to a Bayesian model: Behavioral data obtained with perceptual decision making experiments are typically analyzed with the drift-diffusion model. This parsimonious model accumulates noisy pieces of evidence toward a decision bound to explain the accuracy and reaction times of subjects. Recently, Bayesian models have been proposed to explain how the brain extracts information from noisy input as typically presented in perceptual decision making tasks. It has long been known that the drift-diffusion model is tightly linked with such functional Bayesian models but the precise relationship of the two mechanisms was never made explicit. Using a Bayesian model, we derived the equations which relate parameter values between these models. In practice we show that this equivalence is useful when fitting multi-subject data. We further show that the Bayesian model suggests different decision variables which all predict equal responses and discuss how these may be discriminated based on neural correlates of accumulated evidence. In addition, we discuss extensions to the Bayesian model which would be difficult to derive for the drift-diffusion model. We suggest that these and other extensions may be highly useful for deriving new experiments which test novel hypotheses.) <|cite_end|> <|cite_start|> (Reference: The physics of optimal decision making: a formal analysis of models of performance in two-alternative forced-choice tasks.: In this article, the authors consider optimal decision making in two-alternative forced-choice (TAFC) tasks. They begin by analyzing 6 models of TAFC decision making and show that all but one can be reduced to the drift diffusion model, implementing the statistically optimal algorithm (most accurate for a given speed or fastest for a given accuracy). They prove further that there is always an optimal trade-off between speed and accuracy that maximizes various reward functions, including reward rate (percentage of correct responses per unit time), as well as several other objective functions, including ones weighted for accuracy. They use these findings to address empirical data and make novel predictions about performance under optimality.) <|cite_end|>. In a DDM, which is often implemented with random walk process, the decision-maker accumulates evidence until the relative decision value meets one of the two decision boundaries, and a choice is made corresponding to the boundary being crossed; the corresponding choice is then selected to be the resulting decision <|cite_start|> (Reference: Testing the drift-diffusion model: Significance The drift-diffusion model (DDM) has been widely used in psychology and neuroeconomics to explain observed patterns of choices and response times. This paper provides an identification and characterization theorems for this model: We show that the parameters are uniquely pinned down and determine which datasets are consistent with some form of DDM. We then develop a statistical test of the model based on finite datasets using spline estimation. These results establish the empirical content of the model and provide a way for researchers to see when it is applicable. 
The drift-diffusion model (DDM) is a model of sequential sampling with diffusion signals, where the decision maker accumulates evidence until the process hits either an upper or lower stopping boundary and then stops and chooses the alternative that corresponds to that boundary. In perceptual tasks, the drift of the process is related to which choice is objectively correct, whereas in consumption tasks, the drift is related to the relative appeal of the alternatives. The simplest version of the DDM assumes that the stopping boundaries are constant over time. More recently, a number of papers have used nonconstant boundaries to better fit the data. This paper provides a statistical test for DDMs with general, nonconstant boundaries. As a by-product, we show that the drift and the boundary are uniquely identified. We use our condition to nonparametrically estimate the drift and the boundary and construct a test statistic based on finite samples.) <|cite_end|>. Starting point (initial bias), boundary separation, and drift rate are parameters of the DDM. Boundary separation is effectively manipulated by changing the step size, and boundary shifts are effectively implemented by moving the initial bias. In a DDM simulation, Response Time (RT) refers to the first-exit time of the drift process crossing the predetermined boundary.
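For concreteness, a minimal simulation sketch of this random-walk implementation (our own Python illustration; the step size, boundary, and starting-point values below are placeholders rather than values from any cited study) is:

import numpy as np

def simulate_ddm(step=0.05, start=0.0, bound=1.0, max_steps=100000, rng=None):
    # Random-walk implementation: accumulate noisy evidence from the starting
    # point (initial bias) until the relative decision value crosses +bound or
    # -bound; the response time (RT) is the first-exit time.
    rng = np.random.default_rng() if rng is None else rng
    z = start
    for t in range(1, max_steps + 1):
        z += step * rng.standard_normal()      # increment scaled by the step size d
        if abs(z) >= bound:
            return t, (1 if z > 0 else -1)     # RT and the boundary that was crossed
    return max_steps, 0                        # no boundary crossed within the horizon

In such a simulation, widening the boundary (or shrinking the step size) lengthens RTs, while moving the starting point toward one boundary biases both the choice and its speed.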
Mathematically, the relative decision value $Z_t$ at any given time $t$ is modeled by
\begin{equation}
Z_t=Z_{t-1}+d\cdot B_t
\end{equation}
where $B_t$ is a standard Brownian motion and $d$ is the step size. The boundary crossing (first-exit) time $\tau$ is
\begin{equation}
\tau=\inf\{t\geq 0:|Z_t|\geq 1\}
\end{equation} <|paper_end|> | [
"<|reference_start|> Computational models of emotion inference in Theory of Mind: A review and roadmap: Abstract Research on social cognition has fruitfully applied computational modeling approaches to explain how observers understand and reason about others’ mental states. By contrast, there has been less work on modeling observers’ understanding of emotional states. We propose an intuitive theory framework to studying affective cognition—how humans reason about emotions—and derive a taxonomy of inferences within affective cognition. Using this taxonomy, we review formal computational modeling work on such inferences, including causal reasoning about how others react to events, reasoning about unseen causes of emotions, reasoning with multiple cues, as well as reasoning from emotions to other mental states. In addition, we provide a roadmap for future research by charting out inferences—such as hypothetical and counterfactual reasoning about emotions—that are ripe for future computational modeling work. This framework proposes unifying these various types of reasoning as Bayesian inference within a common “intuitive Theory of Emotion.” Finally, we end with a discussion of important theoretical and methodological challenges that lie ahead in modeling affective cognition. <|reference_end|>",
"<|reference_start|> The James-Lange theory of emotions: a critical examination and an alternative theory. By Walter B. Cannon, 1927.: <|reference_end|>",
"<|reference_start|> Interaction between physiological and cognitive determinants of emotions: Experimental studies on Schachter's theory of emotions: <|reference_end|>",
"<|reference_start|> Toward an attribution therapy: the reduction of fear through induced cognitive-emotional misattribution.: <|reference_end|>"
] | [
6,
14,
28,
30
] | {"<|cite_1|>": "ss-1125041", "<|multi_cite_2_1|>": "ss-1576990", "<|multi_cite_2_2|>": "ss-1183772", "<|cite_3|>": "ss-970463", "<|cite_4|>": "ss-1347366", "<|multi_cite_31_1|>": "ss-905226", "<|multi_cite_31_2|>": "ss-905225", "<|cite_5|>": "ss-1857002", "<|cite_6|>": "ss-801585", "<|cite_7|>": "ss-1857003", "<|cite_8|>": "ss-970463", "<|cite_9|>": "ss-905225", "<|cite_10|>": "ss-1857004", "<|cite_11|>": "ss-1857005", "<|cite_12|>": "ss-1736430", "<|cite_13|>": "ss-1857006", "<|cite_14|>": "ss-1196552", "<|multi_cite_15_1|>": "ss-1857007", "<|cite_16|>": "ss-1857002", "<|multi_cite_17_1|>": "ss-863781", "<|multi_cite_17_2|>": "ss-1005953", "<|multi_cite_18_1|>": "ss-1857008", "<|multi_cite_19_1|>": "ss-1155850", "<|multi_cite_19_2|>": "ss-1857009", "<|cite_20|>": "ss-1196552", "<|cite_21|>": "ss-1857010", "<|cite_25|>": "ss-1857013", "<|cite_26|>": "ss-1857014", "<|cite_27|>": "ss-1857015", "<|cite_22|>": "ss-1857011", "<|cite_28|>": "ss-1857016", "<|cite_29|>": "ss-1228069", "<|cite_23|>": "ss-905226", "<|cite_30|>": "ss-905225", "<|multi_cite_32_1|>": "ss-1935611", "<|multi_cite_32_2|>": "ss-857840", "<|cite_24|>": "ss-1857012"} |
2011.12882 | <|paper_start|> Title: Sparse Multi-Decoder Recursive Projection Aggregation for Reed-Muller Codes
Abstract: Sparse Multi-Decoder Recursive Projection Aggregation for Reed-Muller Codes: Reed-Muller (RM) codes are one of the oldest families of codes. Recently, a recursive projection aggregation (RPA) decoder has been proposed, which achieves a performance that is close to the maximum likelihood decoder for short-length RM codes. One of its main drawbacks, however, is the large amount of computations needed. In this paper, we devise a new algorithm to lower the computational budget while keeping a performance close to that of the RPA decoder. The proposed approach consists of multiple sparse RPAs that are generated by performing only a selection of projections in each sparsified decoder. In the end, a cyclic redundancy check (CRC) is used to decide between output codewords. Simulation results show that our proposed approach reduces the RPA decoder's computations up to $80\%$ with negligible performance loss.
Introduction
\label{sec:intro}
Reed-Muller (RM) codes were introduced by Muller <|cite_start|> (Reference: Application of Boolean algebra to switching circuit design and to error detection: A solution is sought to the general problem of simplifying switching circuits that have more than one output. The mathematical treatment of the problem applies only to circuits that may be represented by “polynomials” in Boolean algebra. It is shown that certain parts of the multiple output problem for such circuits may be reduced to a single output problem whose inputs are equal in number to the sum of the numbers of inputs and outputs in the original problem. A particularly simple reduction may be effected in the case of two outputs. Various techniques are described for simplifying Boolean expressions, called “+ polynomials,” in which the operation “exclusive or” appears between terms. The methods described are particularly suitable for use with an automatic computer, and have been tested on the Illiac. An unexpected metric relationship is shown to exist between the members of certain classes of “+ polynomials” called “nets.” This relationship may be used for constructing error-detecting codes, provided the number of bits in the code is a power of two.) <|cite_end|> in 1954. Shortly after, Reed proposed a majority logic decoding algorithm <|cite_start|> (Reference: A class of multiple-error-correcting codes and the decoding scheme: linear error-correcting codes used in communications. (14) I. S. Reed, “A class of multiple-errorcorrecting codes and the decoding scheme,” IRE. Trans. A class of multiple-error-correcting codes and the decoding scheme. more. less. I. Reed · Details · Authors · Fields of science · Bibliography · Quotations · Similar. linear error correcting codes used in communications (2).For bit study is to device a coding scheme which is able to detect and correct such errors (6). (8) Reed, I. S., 'Class of multiple error correcting codes and their decoding scheme'.) <|cite_end|> correcting errors up to half of its minimum distance. Since then, many approaches for decoding RM codes have been investigated. An overcomplete minimum weight parity check matrix is used in <|cite_start|> (Reference: Hard- and soft-decision decoding beyond the half minimum distance---An algorithm for linear codes: A decoding algorithm for linear codes that uses the minimum weight words of the dual code as parity checks is defined. This algorithm is able to correct beyond the half minimum distance and has the capability of including soft-decision decoding. Results on applying this algorithm to quadratic residue (QR) codes, BCH codes, and the Golay codes (with and without soft-decision decoding) are presented.) <|cite_end|> to exploit the redundant code constraints; recursive list decoding in <|cite_start|> (Reference: Recursive decoding and its performance for low-rate Reed-Muller codes: Recursive decoding techniques are considered for Reed-Muller (RM) codes of growing length n and fixed order r. An algorithm is designed that has complexity of order nlogn and corrects most error patterns of weight up to n(1/2-/spl epsiv/) given that /spl epsiv/ exceeds n/sup -1/2r/. This improves the asymptotic bounds known for decoding RM codes with nonexponential complexity. To evaluate decoding capability, we develop a probabilistic technique that disintegrates decoding into a sequence of recursive steps. Although dependent, subsequent outputs can be tightly evaluated under the assumption that all preceding decodings are correct. 
In turn, this allows us to employ second-order analysis and find the error weights for which the decoding error probability vanishes on the entire sequence of decoding steps as the code length n grows.) <|cite_end|> <|cite_start|> (Reference: Soft-decision decoding of Reed-Muller codes: recursive lists: Recursive list decoding is considered for Reed-Muller (RM) codes. The algorithm repeatedly relegates itself to the shorter RM codes by recalculating the posterior probabilities of their symbols. Intermediate decodings are only performed when these recalculations reach the trivial RM codes. In turn, the updated lists of most plausible codewords are used in subsequent decodings. The algorithm is further improved by using permutation techniques on code positions and by eliminating the most error-prone information bits. Simulation results show that for all RM codes of length 256 and many subcodes of length 512, these algorithms approach maximum-likelihood (ML) performance within a margin of 0.1 dB. As a result, we present tight experimental bounds on ML performance for these codes) <|cite_end|> <|cite_start|> (Reference: Soft-decision decoding of Reed-Muller codes: recursive lists: Recursive list decoding is considered for Reed-Muller (RM) codes. The algorithm repeatedly relegates itself to the shorter RM codes by recalculating the posterior probabilities of their symbols. Intermediate decodings are only performed when these recalculations reach the trivial RM codes. In turn, the updated lists of most plausible codewords are used in subsequent decodings. The algorithm is further improved by using permutation techniques on code positions and by eliminating the most error-prone information bits. Simulation results show that for all RM codes of length 256 and many subcodes of length 512, these algorithms approach maximum-likelihood (ML) performance within a margin of 0.1 dB. As a result, we present tight experimental bounds on ML performance for these codes) <|cite_end|> achieves a performance close to maximum likelihood (ML) decoding with list size at-most $1024$ for short block lengths; and the Sidel'nikov-Pershakov algorithm <|cite_start|> (Reference: Decoding of second order Reed-Muller codes with a large number of errors: Second order Reed-Muller codes are considered over a binary symmetric channel. We present a modified version of V.M Sidel'nikov and A.S. Pershakov algorithm, Problemy Peredachi Informatsii 1992, that has complexity of order n/sup 2/log(n). Experimental results show that the algorithm corrects most error patterns of weight up to n/2(1-e) given that e exceeds n-1/3. This outperforms other decoding algorithms known for RM codes. Decoding performance for known algorithms has been evaluated and the results correspond to asymptotic performance for these algorithms.) <|cite_end|> decodes second-order RM codes of length $ \leq 1024$ by exploiting derivatives of the received codeword and majority voting. Additionally, Sakkour's <|cite_start|> (Reference: IEEE Information Theory Workshop, ITW 2022, Mumbai, India, November 1-9, 2022: ) <|cite_end|> variant of <|cite_start|> (Reference: Decoding of second order Reed-Muller codes with a large number of errors: Second order Reed-Muller codes are considered over a binary symmetric channel. We present a modified version of V.M Sidel'nikov and A.S. Pershakov algorithm, Problemy Peredachi Informatsii 1992, that has complexity of order n/sup 2/log(n). 
Experimental results show that the algorithm corrects most error patterns of weight up to n/2(1-e) given that e exceeds n-1/3. This outperforms other decoding algorithms known for RM codes. Decoding performance for known algorithms has been evaluated and the results correspond to asymptotic performance for these algorithms.) <|cite_end|> simplifies the majority voting leading to achieving a smaller decoding error probability.
Recently, RM codes have received a great deal of attention. One reason for this surge of interest is their close connection to polar codes, a family of error-correcting codes that provably achieves capacity for any binary-input memoryless symmetric channel <|cite_start|> (Reference: Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels: A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)}:1\le i\le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{{\mathscr C}_n;n\ge 1\}$ such that ${\mathscr C}_n$ has block-length $N=2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_{e}(N,R) \le \bigoh(N^{-\frac14})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N\log N)$ for each.) <|cite_end|>. This connection was already mentioned in the seminal paper <|cite_start|> (Reference: Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels: A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)}:1\le i\le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{{\mathscr C}_n;n\ge 1\}$ such that ${\mathscr C}_n$ has block-length $N=2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_{e}(N,R) \le \bigoh(N^{-\frac14})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N\log N)$ for each.) 
<|cite_end|>, and it was exploited to design a family of interpolating codes with improved performance at practical block lengths in <|cite_start|> (Reference: From Polar to Reed-Muller Codes: a Technique to Improve the Finite-Length Performance: We explore the relationship between polar and RM codes and we describe a coding scheme which improves upon the performance of the standard polar code at practical block lengths. Our starting point is the experimental observation that RM codes have a smaller error probability than polar codes under MAP decoding. This motivates us to introduce a family of codes that "interpolates" between RM and polar codes, call this family ${\mathcal C}_{\rm inter} = \{C_{\alpha} : \alpha \in [0, 1]\}$, where $C_{\alpha} \big |_{\alpha = 1}$ is the original polar code, and $C_{\alpha} \big |_{\alpha = 0}$ is an RM code. Based on numerical observations, we remark that the error probability under MAP decoding is an increasing function of $\alpha$. MAP decoding has in general exponential complexity, but empirically the performance of polar codes at finite block lengths is boosted by moving along the family ${\mathcal C}_{\rm inter}$ even under low-complexity decoding schemes such as, for instance, belief propagation or successive cancellation list decoder. We demonstrate the performance gain via numerical simulations for transmission over the erasure channel as well as the Gaussian channel.) <|cite_end|>. Furthermore, it has been shown that, under ML decoding, RM codes achieve capacity on erasure channels <|cite_start|> (Reference: {Reed-Muller: 针对或-符合代数系统中缺失对称变量检测的有效方法等问题,提出了该代数系统基于或-符合运算Reed-Muller展开系数的十二类变量对称性检测算法.该算法通过分析逻辑函数关于变量χi、χj展开的子函数系数矩阵和或-符合运算Reed-Muller展开系数按变量Xi、χi组合分解系数矩阵的对应关系,揭示了任意两变量间各类对称性所满足的分解系数矩阵的约束条件,提出了各类逻辑变量的对称性检测步骤.应用结果表明,与传统方法相比,免去了从逻辑函数的CRM展开式变换为最小项展开式或RM展开式的变换域转换过程,也解决了在该域中图形方法检测的完备性问题,具有简单、直观、完备及适合计算机编程等优点.) <|cite_end|>. Achieving similar results for a more general class of channels is a long-standing conjecture.
In general, ML decoding is not computationally efficient. This has spurred research on finding an efficient algorithm. Many approaches taking advantage of different aspects of RM codes have been exploited recently. In <|cite_start|> (Reference: 2020 IEEE International Symposium on Information Theory (ISIT): ) <|cite_end|>, minimum-weight parity checks are employed taking advantage of the large automorphism group of RM codes. Berlekamp-Welch type algorithms on random errors and erasures are considered and analyzed in <|cite_start|> (Reference: Efficiently Decoding Reed–Muller Codes From Random Errors: Reed–Muller (RM) codes encode an <inline-formula> <tex-math notation="LaTeX">$m$ </tex-math></inline-formula>-variate polynomial of degree at most <inline-formula> <tex-math notation="LaTeX">$r$ </tex-math></inline-formula> by evaluating it on all points in <inline-formula> <tex-math notation="LaTeX">$\{0,1\}^{m}$ </tex-math></inline-formula>. We denote this code by <inline-formula> <tex-math notation="LaTeX">$RM(r,m)$ </tex-math></inline-formula>. The minimum distance of <inline-formula> <tex-math notation="LaTeX">$RM(r,m)$ </tex-math></inline-formula> is <inline-formula> <tex-math notation="LaTeX">$2^{m-r}$ </tex-math></inline-formula> and so it cannot correct more than half that number of errors in the worst case. For random errors one may hope for a better result. In this paper we give an efficient algorithm (in the block length <inline-formula> <tex-math notation="LaTeX">$n=2^{m}$ </tex-math></inline-formula>) for decoding random errors in RM codes far beyond the minimum distance. Specifically, for low-rate codes (of degree <inline-formula> <tex-math notation="LaTeX">$r=o(\sqrt {m})$ </tex-math></inline-formula>), we can correct a random set of <inline-formula> <tex-math notation="LaTeX">$(1/2-o(1))n$ </tex-math></inline-formula> errors with high probability. For high rate codes (of degree <inline-formula> <tex-math notation="LaTeX">$m-r$ </tex-math></inline-formula> for <inline-formula> <tex-math notation="LaTeX">$r=o(\sqrt {m/\log m})$ </tex-math></inline-formula>), we can correct roughly <inline-formula> <tex-math notation="LaTeX">$m^{r/2}$ </tex-math></inline-formula> errors. More generally, for any integer <inline-formula> <tex-math notation="LaTeX">$r$ </tex-math></inline-formula>, our algorithm can correct any error pattern in <inline-formula> <tex-math notation="LaTeX">$RM(m-(2r+2),m)$ </tex-math></inline-formula>, for which the same erasure pattern can be corrected in <inline-formula> <tex-math notation="LaTeX">$RM(m-(r+1),m)$ </tex-math></inline-formula>. The results above are obtained by applying recent results of Abbe, Shpilka, and Wigderson (STOC, 2015) and Kudekar <italic>et al.</italic> (STOC, 2016) regarding the ability of RM codes to correct random erasures. The algorithm is based on solving a carefully defined set of linear equations and thus it is significantly different than other algorithms for decoding RM codes that are based on the recursive structure of the code. It can be seen as a more explicit proof of a result of Abbe <italic>et al.</italic> that shows a reduction from correcting erasures to correcting errors, and it also bares some similarities with the error-locating pair method of Pellikaan, Duursma, and Kötter that generalizes the Berlekamp–Welch algorithm for decoding Reed–Solomon codes.) 
<|cite_end|> <|cite_start|> (Reference: On the Performance of Reed-Muller Codes with respect to Random Errors and Erasures: This work proves new results on the ability of binary Reed-Muller codes to decode from random errors and erasures. We obtain these results by proving improved bounds on the weight distribution of Reed-Muller codes of high degrees. Specifically, given weight $\beta \in (0,1)$ we prove an upper bound on the number of codewords of relative weight at most $\beta$. We obtain new results in two different settings: for weights $\beta < 1/2$ and for weights that are close to $1/2$.
Our new bounds on the weight distribution imply that RM codes with $m$ variables and degree $\gamma m$, for some explicit constant $\gamma$, achieve capacity for random erasures (i.e. for the binary erasure channel) and for random errors (for the binary symmetric channel). Earlier, it was known that RM codes achieve capacity for the binary symmetric channel for degrees $r = o(m)$. For the binary erasure channel it was known that RM codes achieve capacity for degree $o(m)$ or $r \in [m/2 \pm O(\sqrt{m})]$. Thus, our result provide a new range of parameters for which RM achieve capacity for these two well studied channels. In addition, our results imply that for every $\epsilon > 0$ (in fact we can get up to $\epsilon = \Omega\left(\sqrt{\frac{\log m}{m}}\right)$) RM codes of degree $r<(1/2-\epsilon)m$ can correct a fraction of $1-o(1)$ random erasures with high probability. We also show that, information theoretically, such codes can handle a fraction of $1/2-o(1)$ random errors with high probability. Thus, for example, given noisy evaluations of a degree $0.499m$ polynomial, it is possible to interpolate it even if a random $0.499$ fraction of the evaluations were corrupted, with high probability. While the $o(1)$ terms are not the correct ones to ensure capacity, these results show that RM codes of such degrees are in some sense close to achieving capacity.) <|cite_end|>. Successive cancellation (SC) decoding <|cite_start|> (Reference: Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels: A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\{W_N^{(i)}:1\le i\le N\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\{W_N^{(i)}\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\{{\mathscr C}_n;n\ge 1\}$ such that ${\mathscr C}_n$ has block-length $N=2^n$, rate $\ge R$, and probability of block error under successive cancellation decoding bounded as $P_{e}(N,R) \le \bigoh(N^{-\frac14})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N\log N)$ for each.) <|cite_end|> and SC list (SCL) decoding <|cite_start|> (Reference: List Decoding of Polar Codes: We describe a successive-cancellation \emph{list} decoder for polar codes, which is a generalization of the classic successive-cancellation decoder of Ar{\i}kan. In the proposed list decoder, up to $L$ decoding paths are considered concurrently at each decoding stage. Then, a single codeword is selected from the list as output. 
If the most likely codeword is selected, simulation results show that the resulting performance is very close to that of a maximum-likelihood decoder, even for moderate values of $L$. Alternatively, if a "genie" is allowed to pick the codeword from the list, the results are comparable to the current state of the art LDPC codes. Luckily, implementing such a helpful genie is easy. Our list decoder doubles the number of decoding paths at each decoding step, and then uses a pruning procedure to discard all but the $L$ "best" paths. %In order to implement this algorithm, we introduce a natural pruning criterion that can be easily evaluated. Nevertheless, a straightforward implementation still requires $\Omega(L \cdot n^2)$ time, which is in stark contrast with the $O(n \log n)$ complexity of the original successive-cancellation decoder. We utilize the structure of polar codes to overcome this problem. Specifically, we devise an efficient, numerically stable, implementation taking only $O(L \cdot n \log n)$ time and $O(L \cdot n)$ space.) <|cite_end|>, initially proposed for polar codes, are also applicable to RM codes as they share a similar construction. An algorithm based on successive factor graph permutations is presented in <|cite_start|> (Reference: Decoding Reed-Muller and Polar Codes by Successive Factor Graph Permutations: Reed-Muller (RM) and polar codes are a class of capacity-achieving channel coding schemes with the same factor graph representation. Low-complexity decoding algorithms fall short in providing a good error-correction performance for RM and polar codes. Using the symmetric group of RM and polar codes, the specific decoding algorithm can be carried out on multiple permutations of the factor graph to boost the error-correction performance. However, this approach results in high decoding complexity. In this paper, we first derive the total number of factor graph permutations on which the decoding can be performed. We further propose a successive permutation (SP) scheme which finds the permutations on the fly, thus the decoding always progresses on a single factor graph permutation. We show that SP can be used to improve the error-correction performance of RM and polar codes under successive-cancellation (SC) and SC list (SCL) decoding, while keeping the memory requirements of the decoders unaltered. Our results for RM and polar codes of length $128$ and rate $0.5$ show that when SP is used and at a target frame error rate of $10^{-4}$, up to $0.5$ dB and $0.1$ dB improvement can be achieved for RM and polar codes respectively.) <|cite_end|>. For a thorough review on RM codes, we refer the reader to <|cite_start|> (Reference: {Reed-Muller: 针对或-符合代数系统中缺失对称变量检测的有效方法等问题,提出了该代数系统基于或-符合运算Reed-Muller展开系数的十二类变量对称性检测算法.该算法通过分析逻辑函数关于变量χi、χj展开的子函数系数矩阵和或-符合运算Reed-Muller展开系数按变量Xi、χi组合分解系数矩阵的对应关系,揭示了任意两变量间各类对称性所满足的分解系数矩阵的约束条件,提出了各类逻辑变量的对称性检测步骤.应用结果表明,与传统方法相比,免去了从逻辑函数的CRM展开式变换为最小项展开式或RM展开式的变换域转换过程,也解决了在该域中图形方法检测的完备性问题,具有简单、直观、完备及适合计算机编程等优点.) <|cite_end|>.
Most recently, a recursive projection aggregation (RPA) <|cite_start|> (Reference: Recursive projection-aggregation decoding of Reed-Muller codes: We propose a new class of efficient decoding algorithms for Reed-Muller (RM) codes over binary-input memoryless channels. The algorithms are based on projecting the code on its cosets, recursively decoding the projected codes (which are lower-order RM codes), and aggregating the reconstructions (e.g., using majority votes). We further provide extensions of the algorithms based on list-decoding algorithms and code concatenation.We run our main algorithm for AWGN channels and Binary Symmetric Channels at the short code length (≤ 1024) and low code rate (≤ 0.5) regime. Simulation results show that the new algorithm not only outperforms the previous decoding algorithms for RM codes, it also outperforms the optimal decoder for polar codes (SCL+CRC) with the same parameters by a wide margin. The performance of the new algorithm for RM codes in those regimes is in fact close to that of the maximal likelihood decoder. Finally, the new decoder naturally allows for parallel implementations.) <|cite_end|> decoder has been proposed. The RPA decoder uses projections of the codeword on its index space cosets in order to obtain valid RM codewords of smaller length. Then, the smaller codewords are recursively decoded and aggregated. RPA decoding performs close to ML decoding up-to length $1024$ for second-order RM codes and also benefits from a parallel implementation. A recursive puncturing aggregation (RXA) decoder is presented in <|cite_start|> (Reference: 2020 IEEE International Symposium on Information Theory (ISIT): ) <|cite_end|>, and it is a modification of RPA decoding using puncturing instead of projections, built for high-rate codes.
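For intuition, a minimal sketch of a single hard-decision projection step (our own Python illustration of the idea, not code from the cited work; soft-decision aggregation and the recursion itself are omitted) is:

import numpy as np

def project_onto_cosets(y, z):
    # Project a received word y of length 2^m onto the cosets of the
    # one-dimensional subspace {0, z}: each coset {t, t XOR z} contributes the
    # single bit y[t] XOR y[t ^ z], giving a word of half the length that lies
    # in an RM code of one order lower and can be decoded recursively.
    n = len(y)
    assert 0 < z < n
    seen = np.zeros(n, dtype=bool)
    projected = []
    for t in range(n):
        if not seen[t]:
            seen[t] = seen[t ^ z] = True
            projected.append(int(y[t]) ^ int(y[t ^ z]))
    return np.array(projected, dtype=np.uint8)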
In this paper, we focus on the RPA decoding algorithm. The near-ML decoding performance benefits of the RPA decoder come with the need for a large computational budget. We propose an optimized version of RPA decoding, namely a sparse RPA with multiple decoders (SRPA). The new method chooses a subset of recursions for each decoder at random. When combined, these smaller decoders can perform close to RPA decoding while requiring a significantly smaller computational budget.
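A schematic outline of this idea is sketched below (our own illustration with placeholder names: decode_sparse_rpa stands for an RPA decoder restricted to the chosen projections, and the CRC-based selection rule shown is only one plausible realisation of the final decision step).

import random

def build_projection_subsets(num_projections, subset_size, num_decoders, seed=0):
    # Each sparsified decoder is assigned a random subset of the available
    # coset projections instead of the full set used by standard RPA.
    rng = random.Random(seed)
    return [rng.sample(range(num_projections), subset_size)
            for _ in range(num_decoders)]

def srpa_decode(llr, subsets, decode_sparse_rpa, crc_passes):
    # Run one sparse RPA decoder per subset and use the CRC to choose among
    # the resulting candidate codewords (here: the first candidate that passes).
    best = None
    for subset in subsets:
        candidate = decode_sparse_rpa(llr, subset)
        if crc_passes(candidate):
            return candidate
        best = candidate
    return best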
We compare the proposed SRPA method with RPA decoding on code lengths $\leq 512$. The results show that by using only two sparse decoders, one can achieve almost the same performance at $20\%$ of the computational budget of RPA decoding. Also, the performance of SRPA is compared with SCL decoding while fixing the computational budget. The results indicate that for second-order RM codes, the performance of the proposed SRPA is more stable, staying relatively close to the performance of ML decoding as the block length increases.
The rest of the paper is organized as follows. In Section~\ref{sec:def}, we give the basic definition of RM codes and the description of the RPA decoder. In Section~\ref{sec:problem}, we discuss the problem formulation and the proposed method. In Section~\ref{sec:sim}, we present the simulation results. Finally, in Section~\ref{sec:conc}, we draw the main conclusions of the paper. <|paper_end|> | [
"<|reference_start|> Soft-decision decoding of Reed-Muller codes: recursive lists: Recursive list decoding is considered for Reed-Muller (RM) codes. The algorithm repeatedly relegates itself to the shorter RM codes by recalculating the posterior probabilities of their symbols. Intermediate decodings are only performed when these recalculations reach the trivial RM codes. In turn, the updated lists of most plausible codewords are used in subsequent decodings. The algorithm is further improved by using permutation techniques on code positions and by eliminating the most error-prone information bits. Simulation results show that for all RM codes of length 256 and many subcodes of length 512, these algorithms approach maximum-likelihood (ML) performance within a margin of 0.1 dB. As a result, we present tight experimental bounds on ML performance for these codes <|reference_end|>",
"<|reference_start|> Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels: A method is proposed, called channel polarization, to construct code sequences that achieve the symmetric capacity $I(W)$ of any given binary-input discrete memoryless channel (B-DMC) $W$. The symmetric capacity is the highest rate achievable subject to using the input letters of the channel with equal probability. Channel polarization refers to the fact that it is possible to synthesize, out of $N$ independent copies of a given B-DMC $W$, a second set of $N$ binary-input channels $\\{W_N^{(i)}:1\\le i\\le N\\}$ such that, as $N$ becomes large, the fraction of indices $i$ for which $I(W_N^{(i)})$ is near 1 approaches $I(W)$ and the fraction for which $I(W_N^{(i)})$ is near 0 approaches $1-I(W)$. The polarized channels $\\{W_N^{(i)}\\}$ are well-conditioned for channel coding: one need only send data at rate 1 through those with capacity near 1 and at rate 0 through the remaining. Codes constructed on the basis of this idea are called polar codes. The paper proves that, given any B-DMC $W$ with $I(W)>0$ and any target rate $R < I(W)$, there exists a sequence of polar codes $\\{{\\mathscr C}_n;n\\ge 1\\}$ such that ${\\mathscr C}_n$ has block-length $N=2^n$, rate $\\ge R$, and probability of block error under successive cancellation decoding bounded as $P_{e}(N,R) \\le \\bigoh(N^{-\\frac14})$ independently of the code rate. This performance is achievable by encoders and decoders with complexity $O(N\\log N)$ for each. <|reference_end|>",
"<|reference_start|> Decoding Reed-Muller and Polar Codes by Successive Factor Graph Permutations: Reed-Muller (RM) and polar codes are a class of capacity-achieving channel coding schemes with the same factor graph representation. Low-complexity decoding algorithms fall short in providing a good error-correction performance for RM and polar codes. Using the symmetric group of RM and polar codes, the specific decoding algorithm can be carried out on multiple permutations of the factor graph to boost the error-correction performance. However, this approach results in high decoding complexity. In this paper, we first derive the total number of factor graph permutations on which the decoding can be performed. We further propose a successive permutation (SP) scheme which finds the permutations on the fly, thus the decoding always progresses on a single factor graph permutation. We show that SP can be used to improve the error-correction performance of RM and polar codes under successive-cancellation (SC) and SC list (SCL) decoding, while keeping the memory requirements of the decoders unaltered. Our results for RM and polar codes of length $128$ and rate $0.5$ show that when SP is used and at a target frame error rate of $10^{-4}$, up to $0.5$ dB and $0.1$ dB improvement can be achieved for RM and polar codes respectively. <|reference_end|>",
"<|reference_start|> {Reed-Muller: 针对或-符合代数系统中缺失对称变量检测的有效方法等问题,提出了该代数系统基于或-符合运算Reed-Muller展开系数的十二类变量对称性检测算法.该算法通过分析逻辑函数关于变量χi、χj展开的子函数系数矩阵和或-符合运算Reed-Muller展开系数按变量Xi、χi组合分解系数矩阵的对应关系,揭示了任意两变量间各类对称性所满足的分解系数矩阵的约束条件,提出了各类逻辑变量的对称性检测步骤.应用结果表明,与传统方法相比,免去了从逻辑函数的CRM展开式变换为最小项展开式或RM展开式的变换域转换过程,也解决了在该域中图形方法检测的完备性问题,具有简单、直观、完备及适合计算机编程等优点. <|reference_end|>"
] | [
4,
10,
18,
19
] | {"<|cite_1|>": "ss-677959", "<|cite_2|>": "ss-677960", "<|cite_3|>": "ss-909069", "<|multi_cite_4_1|>": "ss-1317156", "<|multi_cite_4_2|>": "ss-909067", "<|multi_cite_4_3|>": "ss-909067", "<|cite_5|>": "ss-1317157", "<|cite_6|>": "ss-2491407", "<|cite_7|>": "ss-1317157", "<|cite_8|>": "arxiv-4409", "<|cite_9|>": "arxiv-4409", "<|cite_10|>": "arxiv-55190", "<|cite_11|>": "ss-937382", "<|cite_12|>": "ss-909828", "<|multi_cite_13_1|>": "ss-1317158", "<|multi_cite_13_2|>": "ss-1317159", "<|cite_14|>": "arxiv-4409", "<|cite_15|>": "arxiv-32280", "<|cite_16|>": "arxiv-165488", "<|cite_17|>": "ss-937382", "<|cite_18|>": "ss-1317160", "<|cite_19|>": "ss-909828"} |
2408.14757 | <|paper_start|> Title: Learning effective pruning at initialization from iterative pruning
Abstract: Learning effective pruning at initialization from iterative pruning: Pruning at initialization (PaI) reduces training costs by removing weights before training, which becomes increasingly crucial with the growing network size. However, current PaI methods still have a large accuracy gap with iterative pruning, especially at high sparsity levels. This raises an intriguing question: can we get inspiration from iterative pruning to improve the PaI performance? In the lottery ticket hypothesis, the iterative rewind pruning (IRP) finds subnetworks retroactively by rewinding the parameter to the original initialization in every pruning iteration, which means all the subnetworks are based on the initial state. Here, we hypothesise the surviving subnetworks are more important and bridge the initial feature and their surviving score as the PaI criterion. We employ an end-to-end neural network (\textbf{AutoS}parse) to learn this correlation, input the model's initial features, output their score and then prune the lowest score parameters before training. To validate the accuracy and generalization of our method, we performed PaI across various models. Results show that our approach outperforms existing methods in high-sparsity settings. Notably, as the underlying logic of model pruning is consistent in different models, only one-time IRP on one model is needed (e.g., once IRP on ResNet-18/CIFAR-10, AutoS can be generalized to VGG-16/CIFAR-10, ResNet-18/TinyImageNet, et al.). As the first neural network-based PaI method, we conduct extensive experiments to validate the factors influencing this approach. These results reveal the learning tendencies of neural networks and provide new insights into our understanding and research of PaI from a practical perspective. Our code is available at: https://github.com/ChengYaofeng/AutoSparse.git.
Introduction
\label{intro}
Neural network pruning, a technique employed for several decades <|cite_start|> (Reference: Optimal {Brain} {Damage}: We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.) <|cite_end|> <|cite_start|> (Reference: {Pruning Algorithms-A Survey: A rule of thumb for obtaining good generalization in systems trained by examples is that one should use the smallest system that will fit the data. Unfortunately, it usually is not obvious what size is best; a system that is too small will not be able to learn the data while one that is just big enough may learn very slowly and be very sensitive to initial conditions and learning parameters. This paper is a survey of neural network pruning algorithms. The approach taken by the methods described here is to train a network that is larger than necessary and then remove the parts that are not needed.) <|cite_end|> <|cite_start|> (Reference: What is the State of Neural Network Pruning?: Neural network pruning---the task of reducing the size of a network by removing parameters---has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.) <|cite_end|>, involves selectively removing non-essential parameters from a network. This process maintains the network's inference accuracy while reducing the computational demands. This is particularly beneficial in resource-constrained environments such as mobile devices as it allows for faster response times and reduces energy consumption. While model pruning is traditionally performed after training to improve inference speed without compromising accuracy <|cite_start|> (Reference: The State of Sparsity in Deep Neural Networks: We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. 
Additionally, we replicate the experiments performed by (Frankle & Carbin, 2018) and (Liu et al., 2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification.) <|cite_end|> <|cite_start|> (Reference: To prune, or not to prune: exploring the efficacy of pruning for model compression: Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.) <|cite_end|> <|cite_start|> (Reference: Learning both Weights and Connections for Efficient Neural
Network: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.) <|cite_end|>, the growing number of parameters in neural network models has led to a significant increase in training resource consumption <|cite_start|> (Reference: Language Models are Few-Shot Learners: Recent work has demonstrated substantial gains on many NLP tasks and benchmarks by pre-training on a large corpus of text followed by fine-tuning on a specific task. While typically task-agnostic in architecture, this method still requires task-specific fine-tuning datasets of thousands or tens of thousands of examples. By contrast, humans can generally perform a new language task from only a few examples or from simple instructions - something which current NLP systems still largely struggle to do. Here we show that scaling up language models greatly improves task-agnostic, few-shot performance, sometimes even reaching competitiveness with prior state-of-the-art fine-tuning approaches. Specifically, we train GPT-3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its performance in the few-shot setting. For all tasks, GPT-3 is applied without any gradient updates or fine-tuning, with tasks and few-shot demonstrations specified purely via text interaction with the model. GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks, as well as several tasks that require on-the-fly reasoning or domain adaptation, such as unscrambling words, using a novel word in a sentence, or performing 3-digit arithmetic. At the same time, we also identify some datasets where GPT-3's few-shot learning still struggles, as well as some datasets where GPT-3 faces methodological issues related to training on large web corpora. Finally, we find that GPT-3 can generate samples of news articles which human evaluators have difficulty distinguishing from articles written by humans. We discuss broader societal impacts of this finding and of GPT-3 in general.) <|cite_end|>. This shift has spurred interest in strategies for pruning networks at the early stages of training.
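As a point of reference for the post-training setting mentioned above, a minimal sketch of global magnitude pruning (our own PyTorch illustration of a common baseline, not the method proposed in this paper) is:

import torch

def global_magnitude_masks(model, sparsity):
    # Zero out the smallest-magnitude weights across the whole network,
    # keeping the remaining (1 - sparsity) fraction of parameters.
    all_weights = torch.cat([p.detach().abs().flatten()
                             for p in model.parameters()])
    k = int(sparsity * all_weights.numel())
    if k == 0:
        return {name: torch.ones_like(p) for name, p in model.named_parameters()}
    threshold = torch.kthvalue(all_weights, k).values
    return {name: (p.detach().abs() > threshold).float()
            for name, p in model.named_parameters()}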
Recently, the Lottery Ticket Hypothesis (LTH) <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> has revealed that it is possible to identify efficient subnetworks early in the training process. These subnetworks can achieve accuracy levels comparable to those achieved after full training. Identifying these subnetworks early on could drastically reduce training time and resource usage, providing an efficient pathway to model optimization. Therefore, some approaches <|cite_start|> (Reference: SNIP: Single-shot Network Pruning based on Connection Sensitivity: Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.) 
<|cite_end|> <|cite_start|> (Reference: Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.) <|cite_end|> <|cite_start|> (Reference: Pruning neural networks without any data by iteratively conserving synaptic flow: Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.) <|cite_end|> <|cite_start|> (Reference: REVISITING PRUNING AT INITIALIZATION THROUGH THE LENS OF RAMANUJAN GRAPH: Pruning neural networks at initialization (PaI) has received an upsurge of interest due to its end-to-end saving potential. PaI is able to find sparse subnetworks at initialization that can achieve comparable performance to the full networks. These methods can surpass the trivial baseline of random pruning but suffer from a significant performance gap compared to post-training pruning. 
Previous approaches firmly rely on weights, gradients, and sanity checks as primary signals when conducting PaI analysis. To better understand the underlying mechanism of PaI, we propose to interpret it through the lens of the Ramanujan Graph - a class of expander graphs that are sparse while being highly connected. It is often believed there should be a strong correlation between the Ramanujan graph and PaI since both are about finding sparse and well-connected neural networks. However, the finer-grained link relating highly sparse and connected networks to their relative performance ( i.e. , ranking of difference sparse structures at the same specific global sparsity) is still missing. We observe that not only the Ramanujan property for sparse networks shows no significant relationship to PaI’s relative performance, but maximizing it can also lead to the formation of pseudo-random graphs with no structural meanings. We reveal the underlying cause to be Ramanujan Graph’s strong assumption on the upper bound of the largest nontrivial eigenvalue ( ˆ µ ) of layers belonging to highly sparse networks. We hence propose Iterative Mean Difference of Bound (IMDB) as a mean to relax the ˆ µ upper bound. Likewise, we also show there exists a lower bound for ˆ µ , which we call the Normalized Random Coefficient (NaRC), that gives us an accurate assessment for when sparse but highly connected) <|cite_end|> explore pruning at initialization (PaI), which directly removes weights before training by assessing parameter importance from features available at initialization. However, a comparative study <|cite_start|> (Reference: Pruning Neural Networks at Initialization: Why are We Missing the Mark?: Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), SynFlow (Tanaka et al., 2020), and magnitude pruning. Although these methods surpass the trivial baseline of random pruning, they remain below the accuracy of magnitude pruning after training, and we endeavor to understand why. We show that, unlike pruning after training, randomly shuffling the weights these methods prune within each layer or sampling new initial values preserves or improves accuracy. As such, the per-weight pruning decisions made by these methods can be replaced by a per-layer choice of the fraction of weights to prune. This property suggests broader challenges with the underlying pruning heuristics, the desire to prune at initialization, or both.) <|cite_end|> has shown that these PaI methods, using handcrafted criteria to prune, often do not match the performance of traditional iterative pruning, particularly at high sparsity.
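To make the handcrafted-criterion paradigm concrete, the sketch below shows a minimal one-shot PaI step in the style of SNIP: each weight is scored by the connection sensitivity $|g \cdot w|$ computed from a single mini-batch at initialization, and the globally lowest-scoring weights are pruned. This is an illustrative PyTorch sketch rather than the reference implementation of any cited method; the function name and the choice of a cross-entropy loss are our own assumptions.
\begin{verbatim}
import torch
import torch.nn.functional as F

def snip_style_masks(model, inputs, targets, keep_ratio=0.1):
    """One-shot PaI sketch: score each weight by |grad * weight| at
    initialization, then keep the globally highest-scoring fraction."""
    params = [p for p in model.parameters() if p.requires_grad]
    model.zero_grad()
    loss = F.cross_entropy(model(inputs), targets)   # assumed classification loss
    grads = torch.autograd.grad(loss, params)

    scores = [(g * p).abs() for g, p in zip(grads, params)]
    flat = torch.cat([s.flatten() for s in scores])
    k = max(1, int(keep_ratio * flat.numel()))
    threshold = torch.topk(flat, k, largest=True).values.min()

    # Binary masks: 1 = keep, 0 = prune; applied to the weights during training.
    return [(s >= threshold).float() for s in scores]
\end{verbatim}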
The iterative pruning process of LTH <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> involves one significant step: the rewind, which resets the pruned subnetwork to its original initialization and then starts the next pruning iteration. We refer to this pruning method as Iteration Rewind Pruning (IRP), and it yields two key insights: a) all parameters are reset to their initial values before further training, indicating that the resulting subnetwork is closely tied to its initial state; b) each pruning cycle seeks the best subnetwork among the surviving parameters, so relatively less important parameters are progressively pruned and the retained parameters represent a higher level of importance.
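The sketch below illustrates this IRP loop and how a per-parameter "surviving iteration" count can be recorded. It assumes an externally supplied \texttt{train\_with\_masks} routine that trains the network while holding pruned weights at zero; the 20\% per-round global magnitude pruning and the number of rounds are illustrative choices, not prescribed settings.
\begin{verbatim}
import copy
import torch

def iteration_rewind_pruning(model, train_with_masks, rounds=20, prune_frac=0.2):
    """Sketch of LTH-style iterative pruning with rewind (IRP).
    Returns, for every weight, the number of pruning rounds it survived."""
    init_state = copy.deepcopy(model.state_dict())       # theta_0, used for rewinding
    params = list(model.parameters())
    masks = [torch.ones_like(p) for p in params]          # 1 = alive, 0 = pruned
    survived = [torch.zeros_like(p) for p in params]      # surviving-iteration score

    for _ in range(rounds):
        train_with_masks(model, masks)                    # assumed: keeps pruned weights at zero

        # Global magnitude pruning over the weights that are still alive.
        alive = torch.cat([p.detach().abs()[m.bool()] for p, m in zip(params, masks)])
        k = max(1, int(prune_frac * alive.numel()))
        threshold = torch.kthvalue(alive, k).values

        for p, m, s in zip(params, masks, survived):
            m.mul_((p.detach().abs() > threshold).float())  # drop the smallest alive weights
            s.add_(m)                                       # survivors earn one more round

        model.load_state_dict(init_state)                 # rewind to the original initialization
    return survived
\end{verbatim}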
This motivated us to use the number of pruning iterations a parameter survives as its importance score and to investigate how this score correlates with the parameter's features at initialization. We run IRP with LeNet-300-100 on MNIST and analyse the results. The initialized parameters and their scores are visualized in Fig. 1: the magnitude of a parameter shows no intuitive correlation with its importance. We further examine this importance using other PaI criteria (e.g., connection sensitivity in SNIP and gradient flow in GraSP) in Fig. 1, and the results are likewise inconsistent. This raises two intriguing research questions: does the surviving score obtained from iterative pruning make sense, and, given the unintuitive correlation, how can a parameter's initial features be related to this score?
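As a concrete illustration of this analysis, the sketch below rank-correlates two simple initialization-time features (weight magnitude and a SNIP-style $|w_0 \cdot g_0|$ sensitivity) with the IRP surviving-iteration scores; a GraSP-style feature is omitted here because it requires an additional Hessian-vector product. The feature set and the use of Spearman correlation are our own illustrative choices.
\begin{verbatim}
import torch
import numpy as np
from scipy.stats import spearmanr

def correlate_init_features_with_survival(init_weights, init_grads, survived):
    """Rank-correlate init-time features with IRP surviving-iteration scores.
    All three arguments are lists of tensors aligned with model.parameters()."""
    w = torch.cat([t.flatten() for t in init_weights]).cpu().numpy()
    g = torch.cat([t.flatten() for t in init_grads]).cpu().numpy()
    s = torch.cat([t.flatten() for t in survived]).cpu().numpy()

    features = {
        "magnitude |w0|": np.abs(w),
        "SNIP-style |w0 * g0|": np.abs(w * g),
    }
    for name, f in features.items():
        rho, _ = spearmanr(f, s)
        print(f"{name:>22}: Spearman rho = {rho:.3f}")
\end{verbatim}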
In this paper, we propose \textbf{AutoS}parse, a data-driven framework to predict the above scores before training, which automatically learns PaI criteria from iterative pruning. The network takes initial features, such as the initial parameters and their initial gradients on the dataset, as inputs and outputs a score for each parameter. Parameters with the lowest scores are then pruned according to the desired sparsity level. Comprehensive experiments evaluate the effectiveness and generalization of our method. Surprisingly, we find the results outperform recent state-of-the-art methods. Notably, although iterative pruning is needed to create the training data for the scorer, it is required only once. Similar to existing pruning methods <|cite_start|> (Reference: SNIP: Single-shot Network Pruning based on Connection Sensitivity: Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.) <|cite_end|> <|cite_start|> (Reference: Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.) <|cite_end|>, one criterion can be applied to all models, and our experiments demonstrate that a one-time IRP run on a single model can teach a PaI criterion that transfers to all the models. As the first data-driven criterion for PaI, we conduct extensive experiments across various aspects (e.g., datasets, inputs, models) to explore its effectiveness.
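The sketch below conveys the data-driven scoring idea: a small network maps per-parameter initialization features to a predicted importance score, which is then thresholded globally to reach the target sparsity. The feature set ($|w_0|$, $|g_0|$, layer depth), the MLP architecture, and regressing onto IRP surviving-iteration scores are illustrative assumptions and may differ from the actual AutoS design.
\begin{verbatim}
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """Maps per-parameter initialization features to a predicted importance score."""
    def __init__(self, num_features=3, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                       # x: (num_params, num_features)
        return self.net(x).squeeze(-1)

def init_features(init_weights, init_grads):
    """Illustrative features: |w0|, |g0|, and normalized layer depth."""
    rows, num_layers = [], len(init_weights)
    for depth, (w, g) in enumerate(zip(init_weights, init_grads)):
        d = torch.full_like(w.flatten(), depth / max(1, num_layers - 1))
        rows.append(torch.stack([w.flatten().abs(), g.flatten().abs(), d], dim=1))
    return torch.cat(rows, dim=0)

def prune_with_predicted_scores(model, scorer, features, sparsity=0.9):
    """Prune the parameters with the lowest predicted scores to reach `sparsity`."""
    with torch.no_grad():
        scores = scorer(features)
    k = max(1, int(sparsity * scores.numel()))
    threshold = torch.kthvalue(scores, k).values
    keep, masks, offset = scores > threshold, [], 0
    for p in model.parameters():
        n = p.numel()
        masks.append(keep[offset:offset + n].reshape_as(p).float())
        offset += n
    return masks

# The scorer is fit once, by regressing its outputs onto the surviving-iteration
# scores collected from a single IRP run, and can then be reused across models.
\end{verbatim}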
Unlike previous theoretically motivated methods, these findings reveal the learning tendencies of neural networks and advance the understanding and further exploration of PaI from a practical perspective. Our contributions can be summarized as follows:
\begin{itemize}
\item{We propose a novel PaI parameter importance criterion derived from Iteration Rewind Pruning (IRP) and investigate its characteristics, highlighting the difficulty of designing such criteria manually.}
\item{Based on this importance criterion, we propose a novel PaI approach (AutoS) that uses an end-to-end neural network to determine which parameters to prune. This shifts the field from traditional human-designed criteria to data-driven, automated pruning.}
\item{Extensive experiments demonstrate that AutoS achieves high PaI accuracy and outperforms other baselines, showing that data-driven PaI methods can reach strong performance.}
\item{For AutoS, the first data-driven PaI criterion, comprehensive ablations and analyses evaluate the factors that influence the approach, and the results advance the understanding and exploration of PaI.}
\end{itemize}
Related Work
Neural network pruning encompasses a variety of methodologies characterized by diverse approaches <|cite_start|> (Reference: Dimensionality reduced training by pruning and freezing parts of a deep neural network: a survey: ) <|cite_end|> such as structured <|cite_start|> (Reference: Rethinking the Value of Network Pruning: Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.) <|cite_end|> or unstructured pruning <|cite_start|> (Reference: Pruning filters for efficient convnets,: The success of CNNs in various applications is accompanied by a significant increase in the computation and parameter storage costs. Recent efforts toward reducing these overheads involve pruning and compressing the weights of various layers without hurting original accuracy. However, magnitude-based pruning of weights reduces a significant number of parameters from the fully connected layers and may not adequately reduce the computation costs in the convolutional layers due to irregular sparsity in the pruned networks. We present an acceleration method for CNNs, where we prune filters from CNNs that are identified as having a small effect on the output accuracy. By removing whole filters in the network together with their connecting feature maps, the computation costs are reduced significantly. In contrast to pruning weights, this approach does not result in sparse connectivity patterns. Hence, it does not need the support of sparse convolution libraries and can work with existing efficient BLAS libraries for dense matrix multiplications. We show that even simple filter pruning techniques can reduce inference costs for VGG-16 by up to 34% and ResNet-110 by up to 38% on CIFAR10 while regaining close to the original accuracy by retraining the networks.) 
<|cite_end|> <|cite_start|> (Reference: Exploring the granularity of sparsity in convolutional neural networks: Sparsity helps reducing the computation complexity of DNNs by skipping the multiplication with zeros. The granularity of sparsity affects the efficiency of hardware architecture and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship with different granularity. Coarse-grained sparsity brings more regular sparsity pattern, making it easier for hardware acceleration, and our experimental results show that coarsegrained sparsity have very small impact on the sparsity ratio given no loss of accuracy. Moreover, due to the index saving effect, coarse-grained sparsity is able to obtain similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, which is based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30% – 35% of memory references compared with fine-grained sparsity.) <|cite_end|>, global <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> <|cite_start|> (Reference: Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). 
We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.) <|cite_end|> or layer-wise pruning <|cite_start|> (Reference: Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science: Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erd\H{o}s-R\'enyi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.) <|cite_end|> <|cite_start|> (Reference: Rigging the Lottery: Making All Tickets Winners: Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static. Code used in our work can be found in github.com/google-research/rigl.) <|cite_end|>, and differences in pruning frequency (pruning at initialization <|cite_start|> (Reference: SNIP: Single-shot Network Pruning based on Connection Sensitivity: Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. 
In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.) <|cite_end|> or iterative pruning <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> <|cite_start|> (Reference: Pruning neural networks without any data by iteratively conserving synaptic flow: Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. 
This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.) <|cite_end|>). To enhance the clarity of this study, our analysis specifically concentrates on the following two parts.
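To make the global versus layer-wise distinction concrete, the sketch below builds magnitude-based pruning masks both ways: a single threshold shared by all layers versus one threshold per layer. This is a generic illustration of the two regimes, not the procedure of any particular cited method.
\begin{verbatim}
import torch

def global_magnitude_masks(model, sparsity=0.9):
    """One threshold across the whole network: the globally smallest |w| are
    pruned, so the resulting per-layer sparsity can vary from layer to layer."""
    params = list(model.parameters())
    flat = torch.cat([p.detach().abs().flatten() for p in params])
    k = max(1, int(sparsity * flat.numel()))
    threshold = torch.kthvalue(flat, k).values
    return [(p.detach().abs() > threshold).float() for p in params]

def layerwise_magnitude_masks(model, sparsity=0.9):
    """A separate threshold per layer: every layer keeps the same fraction."""
    masks = []
    for p in model.parameters():
        flat = p.detach().abs().flatten()
        k = max(1, int(sparsity * flat.numel()))
        threshold = torch.kthvalue(flat, k).values
        masks.append((p.detach().abs() > threshold).float())
    return masks
\end{verbatim}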
\paragraph{After training \& Before training}
Most existing pruning methods assign scores to parameters after training, removing those with the lowest scores <|cite_start|> (Reference: Learning to Prune Deep Neural Networks via Layer-wise Optimal Brain Surgeon: How to develop slim and accurate deep neural networks has become crucial for real- world applications, especially for those employed in embedded systems. Though previous work along this research line has shown some promising results, most existing methods either fail to significantly compress a well-trained deep network or require a heavy retraining process for the pruned deep network to re-boost its prediction performance. In this paper, we propose a new layer-wise pruning method for deep neural networks. In our proposed method, parameters of each individual layer are pruned independently based on second order derivatives of a layer-wise error function with respect to the corresponding parameters. We prove that the final prediction performance drop after pruning is bounded by a linear combination of the reconstructed errors caused at each layer. Therefore, there is a guarantee that one only needs to perform a light retraining process on the pruned network to resume its original prediction performance. We conduct extensive experiments on benchmark datasets to demonstrate the effectiveness of our pruning method compared with several state-of-the-art baseline methods.) <|cite_end|> <|cite_start|> (Reference: Pruning Convolutional Neural Networks for Resource Efficient Inference: We propose a new formulation for pruning convolutional kernels in neural networks to enable efficient inference. We interleave greedy criteria-based pruning with fine-tuning by backpropagation - a computationally efficient procedure that maintains good generalization in the pruned network. We propose a new criterion based on Taylor expansion that approximates the change in the cost function induced by pruning network parameters. We focus on transfer learning, where large pretrained networks are adapted to specialized tasks. The proposed criterion demonstrates superior performance compared to other criteria, e.g. the norm of kernel weights or feature map activation, for pruning large CNNs after adaptation to fine-grained classification tasks (Birds-200 and Flowers-102) relaying only on the first order gradient information. We also show that pruning can lead to more than 10x theoretical (5x practical) reduction in adapted 3D-convolutional filters with a small drop in accuracy in a recurrent gesture classifier. Finally, we show results for the large-scale ImageNet dataset to emphasize the flexibility of our approach.) <|cite_end|>. This approach is effective as parameter values stabilize after training, simplifying the assessment of their importance. The criteria typically include parameter magnitudes <|cite_start|> (Reference: Learning both Weights and Connections for Efficient Neural
Network: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.) <|cite_end|>, impact on loss <|cite_start|> (Reference: Optimal {Brain} {Damage}: We have used information-theoretic ideas to derive a class of practical and nearly optimal schemes for adapting the size of a neural network. By removing unimportant weights from a network, several improvements can be expected: better generalization, fewer training examples required, and improved speed of learning and/or classification. The basic idea is to use second-derivative information to make a tradeoff between network complexity and training set error. Experiments confirm the usefulness of the methods on a real-world application.) <|cite_end|>, and various complex coefficient <|cite_start|> (Reference: EigenDamage: Structured Pruning in the Kronecker-Factored Eigenbasis: Reducing the test time resource requirements of a neural network while preserving test accuracy is crucial for running inference on resource-constrained devices. To achieve this goal, we introduce a novel network reparameterization based on the Kronecker-factored eigenbasis (KFE), and then apply Hessian-based structured pruning methods in this basis. As opposed to existing Hessian-based pruning algorithms which do pruning in parameter coordinates, our method works in the KFE where different weights are approximately independent, enabling accurate pruning and fast computation. We demonstrate empirically the effectiveness of the proposed method through extensive experiments. In particular, we highlight that the improvements are especially significant for more challenging datasets and networks. With negligible loss of accuracy, an iterative-pruning version gives a 10$\times$ reduction in model size and a 8$\times$ reduction in FLOPs on wide ResNet32.) <|cite_end|> <|cite_start|> (Reference: NISP: Pruning Networks using Neuron Importance Score Propagation: To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. 
In contrast, we argue that it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the second-to-last layer before classification, for a pruned network to retrain its predictive power. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, and formulate network pruning as a binary integer optimization problem and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and then fine-tuned to retain its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.) <|cite_end|> <|cite_start|> (Reference: Dynamic Network Surgery for Efficient DNNs: Deep learning has become a ubiquitous technology to improve machine intelligence. However, most of the existing deep models are structurally very complex, making them difficult to be deployed on the mobile platforms with limited computational power. In this paper, we propose a novel network compression method called dynamic network surgery, which can remarkably reduce the network complexity by making on-the-fly connection pruning. Unlike the previous methods which accomplish this task in a greedy way, we properly incorporate connection splicing into the whole process to avoid incorrect pruning and make it as a continual network maintenance. The effectiveness of our method is proved with experiments. Without any accuracy loss, our method can efficiently compress the number of parameters in LeNet-5 and AlexNet by a factor of $\bm{108}\times$ and $\bm{17.7}\times$ respectively, proving that it outperforms the recent pruning method by considerable margins. Code and some models are available at https://github.com/yiwenguo/Dynamic-Network-Surgery.) <|cite_end|>. However, these algorithms primarily enhance inference efficiency without reducing the computational demands during training. The LTH <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. 
We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> demonstrates that certain subnetworks, identifiable before training, can match the performance of a dense model. Consequently, recent research has focused on developing criteria to effectively identify these subnetworks at initialization. This work <|cite_start|> (Reference: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training: Random pruning is arguably the most naive way to attain sparsity in neural networks, but has been deemed uncompetitive by either post-training pruning or sparse training. In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks. Without any delicate pruning criteria or carefully pursued sparsity structures, we empirically demonstrate that sparsely training a randomly pruned network from scratch can match the performance of its dense equivalent. There are two key factors that contribute to this revival: (i) the network sizes matter: as the original dense networks grow wider and deeper, the performance of training a randomly pruned sparse network will quickly grow to matching that of its dense equivalent, even at high sparsity ratios; (ii) appropriate layer-wise sparsity ratios can be pre-chosen for sparse training, which shows to be another important performance booster. Simple as it looks, a randomly pruned subnetwork of Wide ResNet-50 can be sparsely trained to outperforming a dense Wide ResNet-50, on ImageNet. We also observed such randomly pruned networks outperform dense counterparts in other favorable aspects, such as out-of-distribution detection, uncertainty estimation, and adversarial robustness. Overall, our results strongly suggest there is larger-than-expected room for sparse training at scale, and the benefits of sparsity might be more universal beyond carefully designed pruning. Our source code can be found at https://github.com/VITA-Group/Random_Pruning.) <|cite_end|> evaluates that random pruning is effective when the network size and layer-wise sparsity ratios are appropriate. Several methods propose more universal criteria to predict parameter scores before training, such as connection sensitivity in SNIP <|cite_start|> (Reference: SNIP: Single-shot Network Pruning based on Connection Sensitivity: Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. 
This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.) <|cite_end|>, gradient flow in GraSP <|cite_start|> (Reference: Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.) <|cite_end|>, and synaptic sensitivity in Synflow <|cite_start|> (Reference: Pruning neural networks without any data by iteratively conserving synaptic flow: Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). 
Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.) <|cite_end|>. Additionally, recent findings suggest that the Ramanujan Graph criterion is also beneficial for pruning before training <|cite_start|> (Reference: REVISITING PRUNING AT INITIALIZATION THROUGH THE LENS OF RAMANUJAN GRAPH: Pruning neural networks at initialization (PaI) has received an upsurge of interest due to its end-to-end saving potential. PaI is able to find sparse subnetworks at initialization that can achieve comparable performance to the full networks. These methods can surpass the trivial baseline of random pruning but suffer from a significant performance gap compared to post-training pruning. Previous approaches firmly rely on weights, gradients, and sanity checks as primary signals when conducting PaI analysis. To better understand the underlying mechanism of PaI, we propose to interpret it through the lens of the Ramanujan Graph - a class of expander graphs that are sparse while being highly connected. It is often believed there should be a strong correlation between the Ramanujan graph and PaI since both are about finding sparse and well-connected neural networks. However, the finer-grained link relating highly sparse and connected networks to their relative performance ( i.e. , ranking of difference sparse structures at the same specific global sparsity) is still missing. We observe that not only the Ramanujan property for sparse networks shows no significant relationship to PaI’s relative performance, but maximizing it can also lead to the formation of pseudo-random graphs with no structural meanings. We reveal the underlying cause to be Ramanujan Graph’s strong assumption on the upper bound of the largest nontrivial eigenvalue ( ˆ µ ) of layers belonging to highly sparse networks. We hence propose Iterative Mean Difference of Bound (IMDB) as a mean to relax the ˆ µ upper bound. Likewise, we also show there exists a lower bound for ˆ µ , which we call the Normalized Random Coefficient (NaRC), that gives us an accurate assessment for when sparse but highly connected) <|cite_end|>. This work focuses on pruning before training.
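As an example of a criterion that needs no data at all, the sketch below computes a SynFlow-style synaptic saliency: parameters are temporarily replaced by their absolute values, a single all-ones input is propagated, and each weight is scored by $|w \cdot \partial R / \partial w|$ for the resulting scalar output $R$. This is a simplified single-pass sketch under our own assumptions; the published algorithm applies such scoring iteratively with a sparsity schedule, and details such as double precision are omitted here.
\begin{verbatim}
import torch

def synflow_style_scores(model, input_shape):
    """Data-free saliency sketch: score_i = |w_i * dR/dw_i|, where R is the
    output of the 'linearized' (all-|w|) network on an all-ones input."""
    params = list(model.parameters())

    signs = [p.data.sign() for p in params]        # remember signs to restore later
    for p in params:
        p.data.abs_()                              # linearize: make every weight non-negative

    model.zero_grad()
    ones = torch.ones(1, *input_shape)             # a single all-ones "image"
    R = model(ones).sum()                          # scalar proxy for total synaptic flow
    R.backward()

    scores = [((p.grad if p.grad is not None else torch.zeros_like(p)) * p.data).abs()
              for p in params]

    for p, s in zip(params, signs):                # restore the original weights
        p.data.mul_(s)
    model.zero_grad()
    return scores
\end{verbatim}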
\paragraph{Pruning at Initialization \& Iterative Pruning}
The LTH <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|> introduces a classical iterative pruning process that involves training, pruning, and rewinding. This cycle is repeated until an optimal subnetwork is identified, characterized by a mask that signifies an efficient subnetwork specific to the initialized model and dataset. Despite its high accuracy, iterative pruning is resource-intensive, prompting recent research toward less costly alternatives, such as PaI. Techniques such as SNIP <|cite_start|> (Reference: SNIP: Single-shot Network Pruning based on Connection Sensitivity: Pruning large neural networks while maintaining their performance is often desirable due to the reduced space and time complexity. In existing methods, pruning is done within an iterative optimization procedure with either heuristically designed pruning schedules or additional hyperparameters, undermining their utility. In this work, we present a new approach that prunes a given network once at initialization prior to training. To achieve this, we introduce a saliency criterion based on connection sensitivity that identifies structurally important connections in the network for the given task. This eliminates the need for both pretraining and the complex pruning schedule while making it robust to architecture variations. After pruning, the sparse network is trained in the standard way. Our method obtains extremely sparse networks with virtually the same accuracy as the reference network on the MNIST, CIFAR-10, and Tiny-ImageNet classification tasks and is broadly applicable to various architectures including convolutional, residual and recurrent networks. Unlike existing methods, our approach enables us to demonstrate that the retained connections are indeed relevant to the given task.) 
<|cite_end|> and GraSP <|cite_start|> (Reference: Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.) <|cite_end|> implement pruning in a single step at initialization. In contrast, some approaches adopt a more labour-intensive strategy that prunes iteratively before training, allowing information to accumulate across rounds. SNIP-it <|cite_start|> (Reference: Pruning via Iterative Ranking of Sensitivity Statistics: With the introduction of SNIP [arXiv:1810.02340v2], it has been demonstrated that modern neural networks can effectively be pruned before training. Yet, its sensitivity criterion has since been criticized for not propagating training signal properly or even disconnecting layers. As a remedy, GraSP [arXiv:2002.07376v1] was introduced, compromising on simplicity. However, in this work we show that by applying the sensitivity criterion iteratively in smaller steps - still before training - we can improve its performance without difficult implementation. As such, we introduce 'SNIP-it'. We then demonstrate how it can be applied for both structured and unstructured pruning, before and/or during training, therewith achieving state-of-the-art sparsity-performance trade-offs. That is, while already providing the computational benefits of pruning in the training process from the start. Furthermore, we evaluate our methods on robustness to overfitting, disconnection and adversarial attacks as well.) <|cite_end|> iteratively tests on a small batch, using the feedback from this batch to refine the pruning of the entire model. Synflow <|cite_start|> (Reference: Pruning neural networks without any data by iteratively conserving synaptic flow: Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design.
We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.) <|cite_end|> shows that iterative pruning can prevent layer collapse and proposes data-agnostic criteria that facilitate rapid iterations when predicting parameter scores. While PaI is faster, there is still a performance gap compared to iterative pruning, especially at high levels of sparsity. Our motivation is to minimize the performance discrepancy between PaI and iterative pruning. <|paper_end|>
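To make the train-prune-rewind cycle described above concrete, the following is a minimal, illustrative sketch of LTH-style iterative magnitude pruning in PyTorch-flavoured Python. It is not code from the cited works; the global magnitude ranking, the per-round pruning fraction, and the assumed train_fn helper are illustrative choices only.

\begin{verbatim}
import copy
import torch

def iterative_magnitude_pruning(model, train_fn, rounds=5, prune_frac=0.2):
    """Train, prune the smallest surviving weights, rewind, repeat."""
    init_state = copy.deepcopy(model.state_dict())      # weights at initialization
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()
             if p.dim() > 1}                            # prune weight matrices only

    for _ in range(rounds):
        train_fn(model, masks)                          # assumed helper: trains with masks applied
        # Rank surviving weights globally by magnitude.
        scores = torch.cat([(p.detach().abs() * masks[n]).flatten()
                            for n, p in model.named_parameters() if n in masks])
        surviving = scores[scores > 0]
        k = max(1, int(prune_frac * surviving.numel()))
        threshold = torch.kthvalue(surviving, k).values  # k-th smallest magnitude
        for n, p in model.named_parameters():
            if n in masks:
                masks[n] *= (p.detach().abs() > threshold).float()
        model.load_state_dict(init_state)               # rewind survivors to the initialization
    return masks                                        # "winning ticket" mask
\end{verbatim}

Pruning-at-initialization methods such as SNIP or GraSP replace this expensive outer loop with a single (or a few) saliency evaluations before any training takes place.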
"<|reference_start|> The State of Sparsity in Deep Neural Networks: We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. Additionally, we replicate the experiments performed by (Frankle & Carbin, 2018) and (Liu et al., 2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification. <|reference_end|>",
"<|reference_start|> Exploring the granularity of sparsity in convolutional neural networks: Sparsity helps reducing the computation complexity of DNNs by skipping the multiplication with zeros. The granularity of sparsity affects the efficiency of hardware architecture and the prediction accuracy. In this paper we quantitatively measure the accuracy-sparsity relationship with different granularity. Coarse-grained sparsity brings more regular sparsity pattern, making it easier for hardware acceleration, and our experimental results show that coarsegrained sparsity have very small impact on the sparsity ratio given no loss of accuracy. Moreover, due to the index saving effect, coarse-grained sparsity is able to obtain similar or even better compression rates than fine-grained sparsity at the same accuracy threshold. Our analysis, which is based on the framework of a recent sparse convolutional neural network (SCNN) accelerator, further demonstrates that it saves 30% – 35% of memory references compared with fine-grained sparsity. <|reference_end|>",
"<|reference_start|> NISP: Pruning Networks using Neuron Importance Score Propagation: To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that it is essential to prune neurons in the entire neuron network jointly based on a unified goal: minimizing the reconstruction error of important responses in the \"final response layer\" (FRL), which is the second-to-last layer before classification, for a pruned network to retrain its predictive power. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, and formulate network pruning as a binary integer optimization problem and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing neurons with least importance, and then fine-tuned to retain its predictive power. NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss. <|reference_end|>",
"<|reference_start|> Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. <|reference_end|>"
] | [
3,
19,
32,
37
] | {"<|multi_cite_1_1|>": "ss-1117443", "<|multi_cite_1_2|>": "ss-1065219", "<|multi_cite_1_3|>": "arxiv-252302", "<|multi_cite_2_1|>": "arxiv-192853", "<|multi_cite_2_2|>": "arxiv-136506", "<|multi_cite_2_3|>": "ss-700765", "<|cite_3|>": "arxiv-268228", "<|cite_4|>": "arxiv-151068", "<|multi_cite_5_1|>": "arxiv-175116", "<|multi_cite_5_2|>": "arxiv-248834", "<|multi_cite_5_3|>": "arxiv-270655", "<|multi_cite_5_4|>": "ss-740855", "<|cite_6|>": "arxiv-290585", "<|cite_7|>": "arxiv-151068", "<|multi_cite_8_1|>": "arxiv-175116", "<|multi_cite_8_2|>": "arxiv-248834", "<|cite_9|>": "ss-1593564", "<|cite_10|>": "arxiv-175999", "<|multi_cite_11_1|>": "ss-1677214", "<|multi_cite_11_2|>": "ss-2146758", "<|multi_cite_12_1|>": "arxiv-151068", "<|multi_cite_12_2|>": "arxiv-248834", "<|multi_cite_13_1|>": "arxiv-129376", "<|multi_cite_13_2|>": "arxiv-236198", "<|cite_14|>": "arxiv-175116", "<|multi_cite_15_1|>": "arxiv-151068", "<|multi_cite_15_2|>": "arxiv-270655", "<|multi_cite_16_1|>": "arxiv-124717", "<|multi_cite_16_2|>": "arxiv-110533", "<|cite_17|>": "ss-700765", "<|cite_18|>": "ss-1117443", "<|multi_cite_19_1|>": "arxiv-204244", "<|multi_cite_19_2|>": "arxiv-140263", "<|multi_cite_19_3|>": "arxiv-104015", "<|cite_20|>": "arxiv-151068", "<|cite_21|>": "arxiv-397183", "<|cite_22|>": "arxiv-175116", "<|cite_23|>": "arxiv-248834", "<|cite_24|>": "arxiv-270655", "<|cite_25|>": "ss-740855", "<|cite_26|>": "arxiv-151068", "<|cite_27|>": "arxiv-175116", "<|cite_28|>": "arxiv-248834", "<|cite_29|>": "arxiv-268789", "<|cite_30|>": "arxiv-270655"} |
2104.11033 | <|paper_start|> Title: Nonlinear Spatial Filtering in Multichannel Speech Enhancement
Abstract: Nonlinear Spatial Filtering in Multichannel Speech Enhancement: The majority of multichannel speech enhancement algorithms are two-step procedures that first apply a linear spatial filter, a so-called beamformer, and combine it with a single-channel approach for postprocessing. However, the serial concatenation of a linear spatial filter and a postfilter is not generally optimal in the minimum mean square error (MMSE) sense for noise distributions other than a Gaussian distribution. Rather, the MMSE optimal filter is a joint spatial and spectral nonlinear function. While estimating the parameters of such a filter with traditional methods is challenging, modern neural networks may provide an efficient way to learn the nonlinear function directly from data. To see if further research in this direction is worthwhile, in this work we examine the potential performance benefit of replacing the common two-step procedure with a joint spatial and spectral nonlinear filter. We analyze three different forms of non-Gaussianity: First, we evaluate on super-Gaussian noise with a high kurtosis. Second, we evaluate on inhomogeneous noise fields created by five interfering sources using two microphones, and third, we evaluate on real-world recordings from the CHiME3 database. In all scenarios, considerable improvements may be obtained. Most prominently, our analyses show that a nonlinear spatial filter uses the available spatial information more effectively than a linear spatial filter as it is capable of suppressing more than $D-1$ directional interfering sources with a $D$-dimensional microphone array without spatial adaptation.
Introduction
\IEEEPARstart{I}{n} our everyday life, we are surrounded by background noise for example traffic noise or competing speakers. Hence, speech signals that are recorded in real environments are often corrupted by noise. Speech enhancement algorithms are employed to recover the target signal from a noisy recording. This is done by suppressing the background noise or reducing other unwanted effects such as reverberation. This way, speech enhancement algorithms aim to improve speech quality and intelligibility. Their fields of application are manifold and range from assisted listening devices to telecommunication all the way to \ac{ASR} front-ends <|cite_start|> (Reference: Multichannel Signal Enhancement Algorithms for Assisted Listening Devices: Exploiting spatial diversity using multiple microphones: In everyday environments, we are frequently immersed by unwanted acoustic noise and interference while we want to listen to acoustic signals, most often speech. Technology for assisted listening is then desired to increase the efficiency of speech communication, reduce listener fatigue, or just allow for enjoying undisturbed sounds (e.g., music). For people with normal hearing, assisted listening devices (ALDs) mainly aim to achieve hearing protection or increase listening comfort; however, for hearing-impaired individuals, as the most prominent user group so far, further progress of assisted listening technology is crucial for better inclusion into our world of pervasive acoustic communication.) <|cite_end|> <|cite_start|> (Reference: Particle flow SMC-PHD filter for audio-visual multi-speaker tracking. Proc. 13th International Conference on Latent Variable Analysis and Signal Separation (LVA/ICA 2017), Grenoble, France, February 21-23, 2017.: Sequential Monte Carlo probability hypothesis density (SMC-PHD) filtering has been recently exploited for audio-visual (AV) based tracking of multiple speakers, where audio data are used to inform the particle distribution and propagation in the visual SMC-PHD filter. However, the performance of the AV-SMC-PHD filter can be affected by the mismatch between the proposal and the posterior distribution. In this paper, we present a new method to improve the particle distribution where audio information (i.e. DOA angles derived from microphone array measurements) is used to detect new born particles and visual information (i.e. histograms) is used to modify the particles with particle flow (PF). Using particle flow has the benefit of migrating particles smoothly from the prior to the posterior distribution. We compare the proposed algorithm with the baseline AV-SMC-PHD algorithm using experiments on the AV16.3 dataset with multi-speaker sequences.) <|cite_end|>.
If the noisy speech signal is captured by a microphone array instead of just a single microphone, then not only tempo-spectral properties but also spatial information can be used to extract the target signal. Spatial filtering aims at suppressing signal components arriving from directions other than the target direction. The filter-and-sum beamforming approach \cite[Sec. 12.4.2]{vary2006digital} achieves this by filtering the individual microphone signals and adding them. In the frequency domain, this amounts to computing the scalar product between a complex weight vector and the vector of spectral representations of the multichannel noisy signal. Hence, the beamforming operation is linear with respect to the noisy input.
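As a concrete illustration of this linearity, the filter-and-sum operation in the short-time Fourier transform domain can be written as
\[
\hat{S}(k,\ell) = \mathbf{w}^{\mathsf{H}}(k)\,\mathbf{y}(k,\ell) = \sum_{d=1}^{D} w_d^{*}(k)\, y_d(k,\ell),
\]
where $\mathbf{y}(k,\ell) \in \mathbb{C}^{D}$ stacks the noisy spectral coefficients of the $D$ microphones at frequency bin $k$ and time frame $\ell$, and $\mathbf{w}(k)$ is the complex beamforming weight vector. This notation is chosen here for illustration and is not taken from this text.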
The beamforming weights are chosen to optimize some performance measure. For example, minimizing the noise variance subject to a distortionless constraint leads to the well-known \ac{MVDR} beamformer \cite[Sec. 3.6]{benesty2008MicrophoneArraySignal}. The noise suppression capability of such a spatial filter alone is often not sufficient and a single-channel filter is applied to the output of the spatial filter to improve the speech enhancement performance. The second processing stage in this two-step processing scheme is often referred to as the postfiltering step.
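For reference, in the illustrative notation introduced above, the \ac{MVDR} weights are the textbook solution of minimizing the noise power at the beamformer output subject to a distortionless constraint towards the target,
\[
\mathbf{w}_{\mathrm{MVDR}}(k) = \operatorname*{arg\,min}_{\mathbf{w}:\,\mathbf{w}^{\mathsf{H}}\mathbf{d}(k)=1} \; \mathbf{w}^{\mathsf{H}}\,\boldsymbol{\Phi}_{nn}(k)\,\mathbf{w}
= \frac{\boldsymbol{\Phi}_{nn}^{-1}(k)\,\mathbf{d}(k)}{\mathbf{d}^{\mathsf{H}}(k)\,\boldsymbol{\Phi}_{nn}^{-1}(k)\,\mathbf{d}(k)},
\]
with $\boldsymbol{\Phi}_{nn}(k)$ denoting the noise covariance matrix and $\mathbf{d}(k)$ the steering (relative transfer function) vector. This equation is included as a standard reminder and is not derived in this text.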
Single-channel speech enhancement has a long research history that has led to a variety of solutions like the classic single-channel Wiener filter \cite[Sec 11.4]{vary2006digital} or other estimators derived in a statistical framework <|cite_start|> (Reference: {Speech Enhancement Using a Minimum-Mean Square Error Short-Time Spectral Amplitude Estimator: Absstroct-This paper focuses on the class of speech enhancement systems which capitalize on the major importance of the short-time spectral amplitude (STSA) of the speech signal in its perception. A system which utilizes a minimum mean-square error (MMSE) STSA estimator is proposed and then compared with other widely used systems which are based on Wiener filtering and the " spectral subtraction " algorithm. In this paper we derive the MMSE STSA estimator, based on modeling speech and noise spectral components as statistically independent Gaussian random variables. We analyze the performance of the proposed STSA estimator and compare it with a STSA estimator derived from the Wiener estimator. We also examine the MMSE STSA estimator under uncertainty of signal presence in the noisy observations. In constructing the enhanced signal, the MMSE STSA estimator is combined with the complex exponential of the noisy phase. It is shown here that the latter is the MMSE estimator of the complex exponential of the original phase, which does not affect the STSA estimation. The proposed approach results in a significant reduction of the noise, and provides enhanced speech with colorless residual noise. The complexity of the proposed algorithm is approximately that of other systems in the discussed class.) <|cite_end|> <|cite_start|> (Reference: Speech {{Enhancement: There has been considerable recent interest on the problem of enhancing degraded speech. This interest is motivated by several factors including a broad set of important applications and the apparent lack of robustness in recent speech compression and recognition systems. One objective of this paper is to provide an overview of various techniques that have been proposed for enhancement of speech. Another objective is to suggest some directions for future research in the speech enhancement problem.) <|cite_end|> <|cite_start|> (Reference: Supercritical minimum mean-weight cycles: We study the weight and length of the minimum mean-weight cycle in the stochastic mean-field distance model, i.e., in the complete graph on $n$ vertices with edges weighted by independent exponential random variables. Mathieu and Wilson showed that the minimum mean-weight cycle exhibits one of two distinct behaviors, according to whether its mean weight is smaller or larger than $1/(ne)$; and that both scenarios occur with positive probability in the limit $n\to\infty$. If the mean weight is $ 1/(ne)$, it is concentrated just above $1/(n e)$, and the length diverges with $n$. The analysis of Mathieu--Wilson gives a detailed characterization of the subcritical regime, including the (non-degenerate) limiting distributions of the weight and length, but leaves open the supercritical behavior. We determine the asymptotics for the supercritical regime, showing that with high probability, the minimum mean weight is $(n e)^{-1}[1 + \pi^2/(2 \log^2 n) + O((\log n)^{-3})]$, and the cycle achieving this minimum has length on the order of $(\log n)^3$.) <|cite_end|>. 
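As one classical example of such a statistically motivated single-channel estimator, the Wiener gain applied per time-frequency bin reads
\[
G_{\mathrm{W}}(k,\ell) = \frac{\xi(k,\ell)}{1+\xi(k,\ell)}, \qquad \hat{S}(k,\ell) = G_{\mathrm{W}}(k,\ell)\, Y(k,\ell),
\]
where $\xi(k,\ell)$ denotes the a priori signal-to-noise ratio and $Y(k,\ell)$ a single-channel noisy spectral coefficient. This is a textbook expression added purely for illustration; the symbol $\xi$ is not used elsewhere in this text.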
Many recent advances in single-channel speech enhancement are driven by the modeling capabilities of \acp{DNN} <|cite_start|> (Reference: {A Regression Approach to Speech Enhancement Based on Deep Neural Networks: In contrast to the conventional minimum mean square error (MMSE)-based noise reduction techniques, we propose a supervised method to enhance speech by means of finding a mapping function between noisy and clean speech signals based on deep neural networks (DNNs). In order to be able to handle a wide range of additive noises in real-world situations, a large training set that encompasses many possible combinations of speech and noise types, is first designed. A DNN architecture is then employed as a nonlinear regression function to ensure a powerful modeling capability. Several techniques have also been proposed to improve the DNN-based speech enhancement system, including global variance equalization to alleviate the over-smoothing problem of the regression model, and the dropout and noise-aware training strategies to further improve the generalization capability of DNNs to unseen noise conditions. Experimental results demonstrate that the proposed framework can achieve significant improvements in both objective and subjective measures over the conventional MMSE based technique. It is also interesting to observe that the proposed DNN approach can well suppress highly nonstationary noise, which is tough to handle in general. Furthermore, the resulting DNN model, trained with artificial synthesized data, is also effective in dealing with noisy speech data recorded in real-world scenarios without the generation of the annoying musical artifact commonly observed in conventional enhancement methods.) <|cite_end|> <|cite_start|> (Reference: A Wavenet for Speech Denoising: Most speech processing techniques use magnitude spectrograms as front-end and are therefore by default discarding part of the signal: the phase. In order to overcome this limitation’ we propose an end-to-end learning method for speech denoising based on Wavenet. The proposed model adaptation retains Wavenet's powerful acoustic modeling capabilities, while significantly reducing its time-complexity by eliminating its autoregressive nature. Specifically, the model makes use of non-causal, dilated convolutions and predicts target fields instead of a single target sample. The discriminative adaptation of the model we propose, learns in a supervised fashion via minimizing a regression loss. These modifications make the model highly parallelizable during both training and inference. Both quantitative and qualitative evaluations indicate that the proposed method is preferred over Wiener filtering, a common method based on processing the magnitude spectrogram.) <|cite_end|> <|cite_start|> (Reference: A Fully Convolutional Neural Network for Speech Enhancement: In hearing aids, the presence of babble noise degrades hearing intelligibility of human speech greatly. However, removing the babble without creating artifacts in human speech is a challenging task in a low SNR environment. Here, we sought to solve the problem by finding a `mapping' between noisy speech spectra and clean speech spectra via supervised learning. Specifically, we propose using fully Convolutional Neural Networks, which consist of lesser number of parameters than fully connected networks. 
The proposed network, Redundant Convolutional Encoder Decoder (R-CED), demonstrates that a convolutional network can be 12 times smaller than a recurrent network and yet achieves better performance, which shows its applicability for an embedded system: the hearing aids.) <|cite_end|> <|cite_start|> (Reference: An epileptic seizures diagnosis system using feature selection, fuzzy temporal naive Bayes and T-CNN: ) <|cite_end|>.
\begin{figure}
\begin{minipage}[]{0.1\linewidth}
\subcaption{}\label{fig:1-separated}
\end{minipage}
\begin{adjustbox}{minipage=0.85\linewidth}
\centering
\includegraphics{compiledfigures/main-figure0.pdf}
\end{adjustbox}
\begin{minipage}[]{0.1\linewidth}
\subcaption{}\label{fig:1-jointfilter}
\end{minipage}
\begin{adjustbox}{minipage=0.85\linewidth}
\centering
\includegraphics{compiledfigures/main-figure1.pdf}
\end{adjustbox}
\caption{(a) Illustration of the commonly employed two-step processing using a linear spatial filter (beamformer) followed by a single-channel postfilter. (b) Illustration of the nonlinear spatial filter investigated in this paper, which joins the spatial and spectral processing into a non-separable nonlinear operation.}
\label{fig:1-comparison}
\end{figure}
It seems convenient to independently develop a spatial filter and a postfilter and combine them into a two-step procedure afterward as shown in Figure \ref{fig:1-separated}. If the noise follows a Gaussian distribution, this approach can even be regarded as optimal in the \ac{MMSE} sense as Balan and Rosca <|cite_start|> (Reference: Microphone array speech enhancement based on optimized IMCRA: Microphone array speech enhancement algorithm uses temporal and spatial information to improve the performance of speech noise reduction significantly. By combining noise estimation algorithm with microphone array speech enhancement, the accuracy of noise estimation is improved, and the computation is reduced. In traditional noise estimation algorithms, the noise power spectrum is not updated in the presence of speech, which leads to the delay and deviation of noise spectrum estimation. An optimized improved minimum controlled recursion average speech enhancement algorithm, based on a microphone matrix is proposed in this paper. It consists of three parts. The first part is the preprocessing, divided into two branches: the upper branch enhances the speech signal, and the lower branch gets the noise. The second part is the optimized improved minimum controlled recursive averaging. The noise power spectrum is updated not only in the non-speech segments but also in the speech segments. Finally, according to the estimated noise power spectrum, the minimum mean-square error log-spectral amplitude algorithm is used to enhance speech. Testing data are from TIMIT and Noisex-92 databases. Short-time objective intelligibility and segmental signal-to-noise ratio are chosen as evaluation metrics. Experimental results show that the proposed speech enhancement algorithm can improve the segmental signal-to-noise ratio and short-time
objective intelligibility for various noise types at different signal-to-noise ratio levels.) <|cite_end|> have shown that the \ac{MMSE} solution can always be separated into the linear \ac{MVDR} beamformer and a postfilter. However, this separability into a linear spatial filter and a postfilter only holds under the restrictive assumption that the noise is Gaussian distributed. The work of Hendriks et al. <|cite_start|> (Reference: On Optimal Multichannel Mean-Squared Error Estimators for Speech Enhancement: In this letter we present discrete Fourier transform (DFT) domain minimum mean-squared error (MMSE) estimators for multichannel noise reduction. The estimators are derived assuming that the clean speech magnitude DFT coefficients are generalized-Gamma distributed. We show that for Gaussian distributed noise DFT coefficients, the optimal filtering approach consists of a concatenation of a minimum variance distortionless response (MVDR) beamformer followed by well-known single-channel MMSE estimators. The multichannel Wiener filter follows as a special case of the presented MSE estimators and is in general suboptimal. For non-Gaussian distributed noise DFT coefficients the resulting spatial filter is in general nonlinear with respect to the noisy microphone signals and cannot be decomposed into an MVDR beamformer and a post-filter.) <|cite_end|> points out that the \ac{MMSE} optimal solution for non-Gaussian noise joins the spatial and spectral processing into a single nonlinear operation. Throughout this work, we call such an approach a \emph{nonlinear spatial filter} for brevity even though spectral processing steps are also included. An illustration is given in Figure \ref{fig:1-jointfilter}.
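The separation result discussed in this paragraph can be summarized compactly in the illustrative notation from above (this restatement is added for clarity and is not an equation given in this text):
\[
\hat{S}_{\mathrm{MMSE}}(k,\ell) = E\{S(k,\ell)\mid \mathbf{y}(k,\ell)\}
\overset{\text{Gaussian noise}}{=} E\{S(k,\ell)\mid \tilde{S}(k,\ell)\},
\qquad \tilde{S}(k,\ell)=\mathbf{w}_{\mathrm{MVDR}}^{\mathsf{H}}(k)\,\mathbf{y}(k,\ell),
\]
i.e., for Gaussian noise the multichannel observation enters the \ac{MMSE} estimate only through the \ac{MVDR} beamformer output, and the remaining single-channel conditional mean takes the role of the postfilter. For non-Gaussian noise this reduction does not hold in general, and the optimal estimator is a joint nonlinear function of the full vector $\mathbf{y}(k,\ell)$.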
The result of Hendriks et al. reveals that the common two-step multichannel processing scheme cannot be considered optimal for more general noise distributions than a Gaussian distribution. This leads to the question if we should invest in the development of nonlinear spatial filters for example using \acp{DNN}. Today, single-channel approaches often use the possibilities of \acp{DNN} to learn complex nonlinear estimators directly from data. In contrast, the field of multichannel speech enhancement is dominated by approaches that use \acp{DNN} only for parameter estimation of a beamformer <|cite_start|> (Reference: Research on multi-resolution modeling and simulation of radar signal processing system: Multi-Resolution modeling techniques have been used in various fields, but the application in radar system simulation is still in the exploratory stage. In this paper, the multiresolution modeling techniques used in radar system simulation, the hierarchical, modular modeling system, Set up corresponding to different requirements of different resolution model library with the multi-resolution modeling techniques. Put forward an improved radar functional simulation system that is aggregation from signal-level simulation, and gives the corresponding simulation model, through the model aggregate can realize the switching from signal-level simulation to functional simulation. Finally, the simulation proves the consistency between the two resolution models.) <|cite_end|> <|cite_start|> (Reference: Research on multi-resolution modeling and simulation of radar signal processing system: Multi-Resolution modeling techniques have been used in various fields, but the application in radar system simulation is still in the exploratory stage. In this paper, the multiresolution modeling techniques used in radar system simulation, the hierarchical, modular modeling system, Set up corresponding to different requirements of different resolution model library with the multi-resolution modeling techniques. Put forward an improved radar functional simulation system that is aggregation from signal-level simulation, and gives the corresponding simulation model, through the model aggregate can realize the switching from signal-level simulation to functional simulation. Finally, the simulation proves the consistency between the two resolution models.) <|cite_end|> or restrict the network architecture in a way that a linear spatial processing model is preserved <|cite_start|> (Reference: Multichannel signal processing with deep neural networks for automatic speech recognition: Multichannel automatic speech recognition (ASR) systems commonly separate speech enhancement, including localization, beamforming, and postfiltering, from acoustic modeling. In this paper, we perform multichannel enhancement jointly with acoustic modeling in a deep neural network framework. Inspired by beamforming, which leverages differences in the fine time structure of the signal at different microphones to filter energy arriving from different directions, we explore modeling the raw time-domain waveform directly. We introduce a neural network architecture, which performs multichannel filtering in the first layer of the network, and show that this network learns to be robust to varying target speaker direction of arrival, performing as well as a model that is given oracle knowledge of the true target speaker direction. 
Next, we show how performance can be improved by factoring the first layer to separate the multichannel spatial filtering operation from a single channel filterbank which computes a frequency decomposition. We also introduce an adaptive variant, which updates the spatial filter coefficients at each time frame based on the previous inputs. Finally, we demonstrate that these approaches can be implemented more efficiently in the frequency domain. Overall, we find that such multichannel neural networks give a relative word error rate improvement of more than 5% compared to a traditional beamforming-based multichannel ASR system and more than 10% compared to a single channel waveform model.) <|cite_end|>. Only a few approaches with and without \acp{DNN} <|cite_start|> (Reference: End-To-End Multi-Task Learning With Attention: We propose a novel multi-task learning architecture, which allows learning of task-specific feature-level attention. Our design, the Multi-Task Attention Network (MTAN), consists of a single shared network containing a global feature pool, together with a soft-attention module for each task. These modules allow for learning of task-specific features from the global features, whilst simultaneously allowing for features to be shared across different tasks. The architecture can be trained end-to-end and can be built upon any feed-forward neural network, is simple to implement, and is parameter efficient. We evaluate our approach on a variety of datasets, across both image-to-image predictions and image classification tasks. We show that our architecture is state-of-the-art in multi-task learning compared to existing methods, and is also less sensitive to various weighting schemes in the multi-task loss function. Code is available at https://github.com/lorenmt/mtan.) <|cite_end|> <|cite_start|> (Reference: Channel-Attention Dense U-Net for Multichannel Speech Enhancement: Supervised deep learning has gained significant attention for speech enhancement recently. The state-of-the-art deep learning methods perform the task by learning a ratio/binary mask that is applied to the mixture in the time-frequency domain to produce the clean speech. Despite the great performance in the single-channel setting, these frameworks lag in performance in the multichannel setting as the majority of these methods a) fail to exploit the available spatial information fully, and b) still treat the deep architecture as a black box which may not be well-suited for multichannel audio processing. This paper addresses these drawbacks, a) by utilizing complex ratio masking instead of masking on the magnitude of the spectrogram, and more importantly, b) by introducing a channel-attention mechanism inside the deep architecture to mimic beamforming. We propose Channel-Attention Dense U-Net, in which we apply the channel-attention unit recursively on feature maps at every layer of the network, enabling the network to perform non-linear beamforming. We demonstrate the superior performance of the network against the state-of-the-art approaches on the CHiME-3 dataset.) <|cite_end|> <|cite_start|> (Reference: Nonlinear Kronecker product filtering for multichannel noise reduction: ) <|cite_end|> have been proposed that extend the spatial processing model to be nonlinear. Still, the questions of how much we can possibly gain by doing this, in which situations, and also where the benefit of using a nonlinear spatial filter comes from have not been addressed adequately. 
These are the questions that we aim to investigate in this paper.
This work is based on a previous conference publication <|cite_start|> (Reference: On Nonlinear Spatial Filtering in Multichannel Speech Enhancement: ) <|cite_end|>. In <|cite_start|> (Reference: Nonlinear spatial filtering for multichannel speech enhancement in inhomogeneous noise fields: A common processing pipeline for multichannel speech enhancement is to combine a linear spatial filter with a single-channel postfilter. In fact, it can be shown that such a combination is optimal in the minimum mean square error (MMSE) sense if the noise follows a multivariate Gaussian distribution. However, for non-Gaussian noise, this serial concatenation is generally suboptimal and may thus also lead to suboptimal results. For instance, in our previous work, we showed that a joint spatial-spectral nonlinear estimator achieves a performance gain of 2.6 dB segmental signal-to-noise ratio (SNR) improvement for heavy-tailed large-kurtosis multivariate noise compared to the traditional combination of a linear spatial beamformer and a postfilter.In this paper, we show that a joint spatial-spectral nonlinear filter is not only advantageous for noise distributions that are significantly more heavy-tailed than a Gaussian but also for distributions that model inhomogeneous noise fields while having rather low kurtosis. In experiments with artificially created noise we measure a gain of 1 dB for inhomogenous noise with low kurtosis and up to 2 dB for inhomogeneous noise fields with moderate kurtosis.) <|cite_end|> we have studied related aspects of these questions. Here, we extend our previous work by more detailed derivations and new analyses that provide some insight into the functioning of the nonlinear spatial filter. In Section \ref{sec:3-theory}, we provide a detailed overview of the theoretical results from a statistical perspective. We include the previously outlined results and also provide a new simplified proof for the finding of Balan and Rosca in <|cite_start|> (Reference: Microphone array speech enhancement based on optimized IMCRA: Microphone array speech enhancement algorithm uses temporal and spatial informa- tion to improve the performance of speech noise reduction significantly. By combining noise estimation algorithm with microphone array speech enhancement, the accuracy of noise estimation is improved, and
the computation is reduced. In traditional noise estimation algorithms, the noise power spectrum is not updated in the presence of speech, which leads to the delay and deviation of noise spectrum estimation. An optimized improved minimum controlled recursion average speech enhancement algorithm, based on a microphone matrix is proposed in this paper. It consists of three parts. The first part is the preprocessing, divided into two branches: the upper branch enhances the speech signal, and the lower branch gets the noise. The second part is the optimized improved minimum controlled recursive averaging. The noise power spectrum is updated not only in the non-speech segments but also in the speech segments. Finally, according to the estimated noise power spectrum, the minimum mean-square error log-spectral amplitude algorithm is used to enhance speech. Testing data are from TIMIT and Noisex-92 databases. Short-time objective intelligibility and segmental signal-to-noise ratio are chosen as evaluation metrics. Experimental results show that the proposed speech enhancement algorithm can improve the segmental signal-to-noise ratio and short-time
objective intelligibility for various noise types at different signal-to-noise ratio levels.) <|cite_end|>. We then evaluate the performance benefit of a nonlinear spatial filter for heavy-tailed noise in Section \ref{sec:4-a-heavy-tailed}, for an inhomogeneous noise field created by five interfering human speakers in Section \ref{sec:4-b-inh-noise-speakers}, and real-world noise recordings in Section \ref{sec:4-d-chime}. In Section \ref{sec:5-interpretation}, we investigate the improved exploitation of spatial information by the nonlinear spatial filter and discuss practical issues of the used analytic nonlinear spatial filter.
Even though nonlinear spatial filters would most likely be implemented using \acp{DNN} in the future, in our analyses we rely on statistical \ac{MMSE} estimators, as these provide more general insights than \ac{DNN}-based nonlinear spatial filters, which would be highly dependent on the network architecture and the training data. <|paper_end|>
"<|reference_start|> Particle flow SMC-PHD filter for audio-visualmulti-speaker tracking. Proc. 13th International Conference on Latent Variable Analysis and Signal Separation(LVA/ICA 2017), Grenoble, France, February 21-23, 2017.: Sequential Monte Carlo probability hypothesis density (SMC- \nPHD) filtering has been recently exploited for audio-visual (AV) based \ntracking of multiple speakers, where audio data are used to inform the \nparticle distribution and propagation in the visual SMC-PHD filter. However, the performance of the AV-SMC-PHD filter can be affected by the \nmismatch between the proposal and the posterior distribution. In this paper, we present a new method to improve the particle distribution where \naudio information (i.e. DOA angles derived from microphone array measurements) is used to detect new born particles and visual information \n(i.e. histograms) is used to modify the particles with particle \nflow (PF). \nUsing particle \nflow has the benefit of migrating particles smoothly from \nthe prior to the posterior distribution. We compare the proposed algorithm with the baseline AV-SMC-PHD algorithm using experiments on \nthe AV16.3 dataset with multi-speaker sequences. <|reference_end|>",
"<|reference_start|> An epileptic seizures diagnosis system using feature selection, fuzzy temporal naive Bayes and T-CNN: <|reference_end|>",
"<|reference_start|> Multichannel signal processing with deep neural networks for automatic speech recognition: Multichannel automatic speech recognition (ASR) systems commonly separate speech enhancement, including localization, beamforming, and postfiltering, from acoustic modeling. In this paper, we perform multichannel enhancement jointly with acoustic modeling in a deep neural network framework. Inspired by beamforming, which leverages differences in the fine time structure of the signal at different microphones to filter energy arriving from different directions, we explore modeling the raw time-domain waveform directly. We introduce a neural network architecture, which performs multichannel filtering in the first layer of the network, and show that this network learns to be robust to varying target speaker direction of arrival, performing as well as a model that is given oracle knowledge of the true target speaker direction. Next, we show how performance can be improved by factoring the first layer to separate the multichannel spatial filtering operation from a single channel filterbank which computes a frequency decomposition. We also introduce an adaptive variant, which updates the spatial filter coefficients at each time frame based on the previous inputs. Finally, we demonstrate that these approaches can be implemented more efficiently in the frequency domain. Overall, we find that such multichannel neural networks give a relative word error rate improvement of more than 5% compared to a traditional beamforming-based multichannel ASR system and more than 10% compared to a single channel waveform model. <|reference_end|>",
"<|reference_start|> Microphone array speech enhancement based on optimized IMCRA: Microphone array speech enhancement algorithm uses temporal and spatial informa- tion to improve the performance of speech noise reduction significantly. By combining noise estimation algorithm with microphone array speech enhancement, the accuracy of noise estimation is improved, and\n the computation is reduced. In traditional noise es- timation algorithms, the noise power spectrum is not updated in the presence of speech, which leads to the delay and deviation of noise spectrum estimation. An optimized im- proved minimum controlled recursion average speech enhancement\n algorithm, based on a microphone matrix is proposed in this paper. It consists of three parts. The first part is the preprocessing, divided into two branches: the upper branch enhances the speech signal, and the lower branch gets the noise. The second part is the optimized improved minimum\n controlled recursive averaging. The noise power spectrum is updated not only in the non-speech segments but also in the speech segments. Fi- nally, according to the estimated noise power spectrum, the minimum mean-square error log-spectral amplitude algorithm is used to enhance speech. Testing\n data are from TIMIT and Noisex-92 databases. Short-time objective intelligibility and seg- mental signal-to-noise ratio are chosen as evaluation metrics. Experimental results show that the proposed speech enhancement algorithm can improve the segmental signal-to-noise ratio and short-time\n objective intelligibility for various noise types at different signal-to-noise ratio levels. <|reference_end|>"
] | [
1,
8,
13,
19
] | {"<|multi_cite_1_1|>": "ss-2136231", "<|multi_cite_1_2|>": "ss-2136232", "<|multi_cite_2_1|>": "ss-1041148", "<|multi_cite_2_2|>": "ss-1351738", "<|multi_cite_2_3|>": "ss-2136233", "<|multi_cite_3_1|>": "ss-930669", "<|multi_cite_3_2|>": "ss-1252843", "<|multi_cite_3_3|>": "ss-2136234", "<|multi_cite_3_4|>": "ss-2133571", "<|cite_4|>": "ss-2136235", "<|cite_5|>": "ss-2136236", "<|multi_cite_6_1|>": "ss-686604", "<|multi_cite_6_2|>": "ss-686604", "<|cite_7|>": "ss-1520790", "<|multi_cite_8_1|>": "ss-1389725", "<|multi_cite_8_2|>": "ss-2136237", "<|multi_cite_8_3|>": "ss-2136238", "<|cite_9|>": "ss-2136239", "<|cite_10|>": "ss-2136240", "<|cite_11|>": "ss-2136235"} |
2303.13220 | <|paper_start|> Title: Parameter-Efficient Sparse Retrievers and Rerankers using Adapters
Abstract: Parameter-Efficient Sparse Retrievers and Rerankers using Adapters: Parameter-efficient transfer learning with adapters has been studied in Natural Language Processing (NLP) as an alternative to full fine-tuning. Adapters are memory-efficient and scale well with downstream tasks by training small bottle-neck layers added between transformer layers while keeping the large pretrained language model (PLM) frozen. In spite of showing promising results in NLP, these methods are under-explored in Information Retrieval. While previous studies have only experimented with dense retrievers or in a cross-lingual retrieval scenario, in this paper we aim to complete the picture on the use of adapters in IR. First, we study adapters for SPLADE, a sparse retriever, for which adapters not only retain the efficiency and effectiveness otherwise achieved by fine-tuning, but are memory-efficient and orders of magnitude lighter to train. We observe that Adapters-SPLADE not only optimizes just 2\% of the training parameters, but also outperforms its fully fine-tuned counterpart and existing parameter-efficient dense IR models on IR benchmark datasets. Second, we address the domain adaptation of neural retrieval thanks to adapters on cross-domain BEIR datasets and TripClick. Finally, we also consider knowledge sharing between rerankers and first-stage rankers. Overall, our study completes the examination of adapters for neural IR.
Introduction
Information Retrieval (IR) systems often aim to return a ranked list of documents ordered with respect to their relevance to a user query.
In modern web search engines, there is in fact not a single retrieval model but several, each specialized for diverse information needs such as different search verticals.
To add to this complexity, multi-stage retrieval considers the effectiveness-efficiency trade-off: first-stage retrievers are essential for fast retrieval of potentially relevant candidate documents from a large corpus. Further down the pipeline, rerankers are added that focus on effectiveness.
With the advent of large Pretrained Language Models (PLM), recent neural retrieval models have millions of parameters. Training, updating and adapting such models implies significant computing and storage costs, calling for efficient methods. Moreover, generalizability across out-of-domain datasets is critical, and even when models are effectively adapted to new domains, full finetuning often comes at the expense of large storage costs and catastrophic forgetting. Fortunately, such research questions have already been studied in the NLP literature <|cite_start|> (Reference: AdapterHub Playground: Simple and Flexible Few-Shot Learning with Adapters: The open-access dissemination of pretrained language models through online repositories has led to a democratization of state-of-the-art natural language processing (NLP) research. This also allows people outside of NLP to use such models and adapt them to specific use-cases. However, a certain amount of technical proficiency is still required which is an entry barrier for users who want to apply these models to a certain task but lack the necessary knowledge or resources. In this work, we aim to overcome this gap by providing a tool which allows researchers to leverage pretrained models without writing a single line of code. Built upon the parameter-efficient adapter modules for transfer learning, our AdapterHub Playground provides an intuitive interface, allowing the usage of adapters for prediction, training and analysis of textual data for a variety of NLP tasks. We present the tool's architecture and demonstrate its advantages with prototypical use-cases, where we show that predictive performance can easily be increased in a few-shot learning scenario. Finally, we evaluate its usability in a user study. We provide the code and a live interface at https://adapter-hub.github.io/playground.) <|cite_end|> <|cite_start|> (Reference: “{B: On January 22, 2010, the internationally renowned British journal Proceedings of the Royal Society B published, as a special issue, 20 palaeontology papers based on Chinese fossil material. This was the journal's first palaeontology special issue. These papers represent part of the great achievements made in recent years by the rapidly developing field of Chinese palaeontology.) <|cite_end|> <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|> with parameter-efficient tuning.
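Since the adapters studied here are small bottleneck layers inserted between transformer layers while the PLM stays frozen (as described in the abstract above), the following PyTorch-style sketch illustrates the idea. It is illustrative only: the class name, the bottleneck size, and the freezing helper are assumptions rather than the exact implementation used in this work.

\begin{verbatim}
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Small trainable bottleneck with a residual connection."""
    def __init__(self, hidden_size: int, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # The residual keeps the frozen layer's output intact when the
        # adapter is close to an identity mapping.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

def freeze_plm_and_create_adapters(encoder_layers, hidden_size: int):
    """Freeze all PLM parameters; only the adapters remain trainable."""
    adapters = nn.ModuleList(BottleneckAdapter(hidden_size)
                             for _ in encoder_layers)
    for layer in encoder_layers:
        for p in layer.parameters():
            p.requires_grad = False
    return adapters  # applied to each layer's output in the forward pass
\end{verbatim}

In practice the adapter output has to be wired into each transformer layer's forward pass (for example after the feed-forward block), which adapter libraries such as AdapterHub's tooling handle for the user; the sketch above only conveys why the number of trainable parameters stays small, namely two small linear layers per transformer layer.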
In spite of very recent work exploring parameter-efficient techniques for neural retrieval, the use of adapters in IR has been overlooked. Previous work on dense retriever had mixed results <|cite_start|> (Reference: Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning: A BERT-based Neural Ranking Model (NRM) can be either a crossencoder or a bi-encoder. Between the two, bi-encoder is highly efficient because all the documents can be pre-processed before the actual query time. In this work, we show two approaches for improving the performance of BERT-based bi-encoders. The first approach is to replace the full fine-tuning step with a lightweight fine-tuning. We examine lightweight fine-tuning methods that are adapter-based, prompt-based, and hybrid of the two. The second approach is to develop semi-Siamese models where queries and documents are handled with a limited amount of difference. The limited difference is realized by learning two lightweight fine-tuning modules, where the main language model of BERT is kept common for both query and document. We provide extensive experiment results for monoBERT, TwinBERT, and ColBERT where three performance metrics are evaluated over Robust04, ClueWeb09b, and MS-MARCO datasets. The results confirm that both lightweight fine-tuning and semi-Siamese are considerably helpful for improving BERT-based bi-encoders. In fact, lightweight fine-tuning is helpful for crossencoder, too) <|cite_end|> and successful adaptation was achieved for cross lingual retrieval <|cite_start|> (Reference: Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval: State-of-the-art neural (re)rankers are notoriously data-hungry which -- given the lack of large-scale training data in languages other than English -- makes them rarely used in multilingual and cross-lingual retrieval settings. Current approaches therefore commonly transfer rankers trained on English data to other languages and cross-lingual setups by means of multilingual encoders: they fine-tune all parameters of pretrained massively multilingual Transformers (MMTs, e.g., multilingual BERT) on English relevance judgments, and then deploy them in the target language(s). In this work, we show that two parameter-efficient approaches to cross-lingual transfer, namely Sparse Fine-Tuning Masks (SFTMs) and Adapters, allow for a more lightweight and more effective zero-shot transfer to multilingual and cross-lingual retrieval tasks. We first train language adapters (or SFTMs) via Masked Language Modelling and then train retrieval (i.e., reranking) adapters (SFTMs) on top, while keeping all other parameters fixed. At inference, this modular design allows us to compose the ranker by applying the (re)ranking adapter (or SFTM) trained with source language data together with the language adapter (or SFTM) of a target language. We carry out a large scale evaluation on the CLEF-2003 and HC4 benchmarks and additionally, as another contribution, extend the former with queries in three new languages: Kyrgyz, Uyghur and Turkish. The proposed parameter-efficient methods outperform standard zero-shot transfer with full MMT fine-tuning, while being more modular and reducing training times. The gains are particularly pronounced for low-resource languages, where our approaches also substantially outperform the competitive machine translation-based rankers.) <|cite_end|>. Our study aims to complete the examination of adapters for neural IR and investigates it with neural sparse retrievers. 
We study ablation of adapter layers to analyze whether all layers contribute equally. We examine how adapter-tuned neural sparse retriever SPLADE <|cite_start|> (Reference: From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective: Neural retrievers based on dense representations combined with Approximate Nearest Neighbors search have recently received a lot of attention, owing their success to distillation and/or better sampling of examples for training -- while still relying on the same backbone architecture. In the meantime, sparse representation learning fueled by traditional inverted indexing techniques has seen a growing interest, inheriting from desirable IR priors such as explicit lexical matching. While some architectural variants have been proposed, a lesser effort has been put in the training of such models. In this work, we build on SPLADE -- a sparse expansion-based retriever -- and show to which extent it is able to benefit from the same training improvements as dense models, by studying the effect of distillation, hard-negative mining as well as the Pre-trained Language Model initialization. We furthermore study the link between effectiveness and efficiency, on in-domain and zero-shot settings, leading to state-of-the-art results in both scenarios for sufficiently expressive models.) <|cite_end|> fares on benchmark IR datasets MS MARCO <|cite_start|> (Reference: MS MARCO: A Human Generated MAchine Reading COmprehension Dataset: We introduce a large scale MAchine Reading COmprehension dataset, which we name MS MARCO. The dataset comprises of 1,010,916 anonymized questions---sampled from Bing's search query logs---each with a human generated answer and 182,669 completely human rewritten generated answers. In addition, the dataset contains 8,841,823 passages---extracted from 3,563,535 web documents retrieved by Bing---that provide the information necessary for curating the natural language answers. A question in the MS MARCO dataset may have multiple answers or no answers at all. Using this dataset, we propose three different tasks with varying levels of difficulty: (i) predict if a question is answerable given a set of context passages, and extract and synthesize the answer as a human would (ii) generate a well-formed answer (if possible) based on the context passages that can be understood with the question and passage context, and finally (iii) rank a set of retrieved passages given a question. The size of the dataset and the fact that the questions are derived from real user search queries distinguishes MS MARCO from other well-known publicly available datasets for machine reading comprehension and question-answering. We believe that the scale and the real-world nature of this dataset makes it attractive for benchmarking machine reading comprehension and question-answering models.) <|cite_end|>, TREC DL 2019 and 2020 <|cite_start|> (Reference: TREC Deep Learning Track: Reusable Test Collections in the Large Data Regime: The TREC Deep Learning (DL) Track studies ad hoc search in the large data regime, meaning that a large set of human-labeled training data is available. Results so far indicate that the best models with large data may be deep neural networks. This paper supports the reuse of the TREC DL test collections in three ways. First we describe the data sets in detail, documenting clearly and in one place some details that are otherwise scattered in track guidelines, overview papers and in our associated MS MARCO leaderboard pages. 
We intend this description to make it easy for newcomers to use the TREC DL data. Second, because there is some risk of iteration and selection bias when reusing a data set, we describe the best practices for writing a paper using TREC DL data, without overfitting. We provide some illustrative analysis. Finally we address a number of issues around the TREC DL data, including an analysis of reusability.) <|cite_end|> and out-of-domain BEIR datasets <|cite_start|> (Reference: BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models: Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities. We hope this framework allows us to better evaluate and understand existing retrieval systems, and contributes to accelerating progress towards better robust and generalizable systems in the future. BEIR is publicly available at https://github.com/UKPLab/beir.) <|cite_end|>. We explore whether generalizability of SPLADE can be further improved with adapter-tuning on BEIR and out-of-domain dataset such as TripClick <|cite_start|> (Reference: TripClick: The Log Files of a Large Health Web Search Engine: Click logs are valuable resources for a variety of information retrieval (IR) tasks. This includes query understanding/analysis, as well as learning effective IR models particularly when the models require large amounts of training data. We release a large-scale domain-specific dataset of click logs, obtained from user interactions of the Trip Database health web search engine. Our click log dataset comprises approximately 5.2 million user interactions collected between 2013 and 2020. We use this dataset to create a standard IR evaluation benchmark -- TripClick -- with around 700,000 unique free-text queries and 1.3 million pairs of query-document relevance signals, whose relevance is estimated by two click-through models. As such, the collection is one of the few datasets offering the necessary data richness and scale to train neural IR models with a large amount of parameters, and notably the first in the health domain. Using TripClick, we conduct experiments to evaluate a variety of IR models, showing the benefits of exploiting this data to train neural architectures. In particular, the evaluation results show that the best performing neural IR model significantly improves the performance by a large margin relative to classical IR models, especially for more frequent queries.) <|cite_end|>. 
In addition, we examine knowledge transfer between first stage retrievers and rerankers with full fine-tuning and adapter-tuning. To the best of our knowledge, this is the first work that studies adapters on sparse retrievers, focuses on the generalizability of sparse models, and explores knowledge transfer between retrievers at different stages of the retrieval pipeline. In summary, we address the following research questions:
\begin{enumerate}
\item RQ1: What is the efficiency-accuracy trade-off of parameter-efficient fine-tuning with adapters on the sparse retriever model SPLADE?
\item RQ2: How does ablating individual adapter layers affect retrieval effectiveness?
\item RQ3: Are adapters effective for adapting sparse neural retrieval to a new domain?
\item RQ4: Could adapters be used to share knowledge between rerankers and first stage rankers?
\end{enumerate}
Related Work
Parameter efficient transfer learning techniques aim to adapt large pretrained models to downstream tasks using a fraction of training parameters, achieving comparable effectiveness to full fine-tuning. Such methods <|cite_start|> (Reference: Prefix-Tuning: Optimizing Continuous Prompts for Generation: Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.) <|cite_end|> <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|> <|cite_start|> (Reference: MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer: The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pre-training. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. 
In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pre-trained multilingual model to a new language. MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and causal commonsense reasoning, and achieves competitive results on question answering. Our code and adapters are available at AdapterHub.ml) <|cite_end|> are memory efficient and scale well to numerous downstream tasks due to the massive reduction in task specific trainable parameters. This makes them an attractive solution for efficient storage and deployment compared to fully fine-tuned instances.
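To make the ``fraction of training parameters'' point concrete, the short PyTorch sketch below (an illustration only, not code from any of the cited works; the module sizes are arbitrary) freezes a pretrained backbone, attaches a small trainable task-specific module, and reports how few parameters would actually be updated:
\begin{verbatim}
# Parameter-efficient tuning in a nutshell: freeze the backbone, train only a
# small task-specific module, and count what would actually be updated.
import torch.nn as nn
from transformers import AutoModel

backbone = AutoModel.from_pretrained("bert-base-uncased")
for p in backbone.parameters():
    p.requires_grad = False                          # backbone stays frozen

task_module = nn.Sequential(                         # stand-in for an adapter/head
    nn.Linear(backbone.config.hidden_size, 64),
    nn.ReLU(),
    nn.Linear(64, backbone.config.hidden_size),
)

trainable = sum(p.numel() for p in task_module.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable: {trainable:,} of {total:,} ({100 * trainable / total:.2f}%)")
\end{verbatim}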
Such methods have been successfully applied to language translation <|cite_start|> (Reference: MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer: The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pre-training. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pre-trained multilingual model to a new language. MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and causal commonsense reasoning, and achieves competitive results on question answering. Our code and adapters are available at AdapterHub.ml) <|cite_end|>, natural language generation <|cite_start|> (Reference: Exploring Versatile Generative Language Model Via Parameter-Efficient Transfer Learning: Fine-tuning pre-trained generative language models to down-stream language generation tasks has shown promising results. However, this comes with the cost of having a single, large model for each task, which is not ideal in low-memory/power scenarios (e.g., mobile). In this paper, we propose an effective way to fine-tune multiple down-stream generation tasks simultaneously using a single, large pre-trained model. The experiments on five diverse language generation tasks show that by just using an additional 2-3% parameters for each task, our model can maintain or even improve the performance of fine-tuning the whole model.) <|cite_end|>, Tabular Question Answering <|cite_start|> (Reference: Parameter-Efficient Abstractive Question Answering over Tables or Text: A long-term ambition of information seeking QA systems is to reason over multi-modal contexts and generate natural answers to user queries. Today, memory intensive pre-trained language models are adapted to downstream tasks such as QA by fine-tuning the model on QA data in a specific modality like unstructured text or structured tables. To avoid training such memory-hungry models while utilizing a uniform architecture for each modality, parameter-efficient adapters add and train small task-specific bottle-neck layers between transformer layers. In this work, we study parameter-efficient abstractive QA in encoder-decoder models over structured tabular data and unstructured textual data using only 1.5% additional parameters for each modality. We also ablate over adapter layers in both encoder and decoder modules to study the efficiency-performance trade-off and demonstrate that reducing additional trainable parameters down to 0.7%-1.0% leads to comparable results. Our models out-perform current state-of-the-art models on tabular QA datasets such as Tablesum and FeTaQA, and achieve comparable performance on a textual QA dataset such as NarrativeQA using significantly less trainable parameters than fine-tuning.) 
<|cite_end|>, and on the GLUE benchmark <|cite_start|> (Reference: Robust Transfer Learning with Pretrained Language Models through Adapters: Transfer learning with large pretrained transformer-based language models like BERT has become a dominating approach for most NLP tasks. Simply fine-tuning those large language models on downstream tasks or combining it with task-specific pretraining is often not robust. In particular, the performance considerably varies as the random seed changes or the number of pretraining and/or fine-tuning iterations varies, and the fine-tuned model is vulnerable to adversarial attack. We propose a simple yet effective adapter-based approach to mitigate these issues. Specifically, we insert small bottleneck layers (i.e., adapter) within each layer of a pretrained model, then fix the pretrained layers and train the adapter layers on the downstream task data, with (1) task-specific unsupervised pretraining and then (2) task-specific supervised training (e.g., classification, sequence labeling). Our experiments demonstrate that such a training scheme leads to improved stability and adversarial robustness in transfer learning to various downstream tasks.) <|cite_end|>,
yet despite these advantages and a large research footprint in NLP, parameter-efficient methods remain under-explored in IR.
A recent comprehensive study <|cite_start|> (Reference: Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models: Despite the success, the process of fine-tuning large-scale PLMs brings prohibitive adaptation costs. In fact, fine-tuning all the parameters of a colossal model and retaining separate instances for different tasks are practically infeasible. This necessitates a new branch of research focusing on the parameter-efficient adaptation of PLMs, dubbed as delta tuning in this paper. In contrast with the standard fine-tuning, delta tuning only fine-tunes a small portion of the model parameters while keeping the rest untouched, largely reducing both the computation and storage costs. Recent studies have demonstrated that a series of delta tuning methods with distinct tuned parameter selection could achieve performance on a par with full-parameter fine-tuning, suggesting a new promising way of stimulating large-scale PLMs. In this paper, we first formally describe the problem of delta tuning and then comprehensively review recent delta tuning approaches. We also propose a unified categorization criterion that divide existing delta tuning methods into three groups: addition-based, specification-based, and reparameterization-based methods. Though initially proposed as an efficient method to steer large models, we believe that some of the fascinating evidence discovered along with delta tuning could help further reveal the mechanisms of PLMs and even deep neural networks. To this end, we discuss the theoretical principles underlying the effectiveness of delta tuning and propose frameworks to interpret delta tuning from the perspective of optimization and optimal control, respectively. Furthermore, we provide a holistic empirical study of representative methods, where results on over 100 NLP tasks demonstrate a comprehensive performance comparison of different approaches. The experimental results also cover the analysis of combinatorial, scaling and transferable properties of delta tuning.) <|cite_end|> categorises parameter efficient transfer learning into 3 categories: 1) Addition based 2) Specification based 3) Reparameterization based. Addition based methods insert intermediate modules into the pretrained model. The newly added modules are adapted to the downstream task while keeping the rest of the pretrained model frozen. The modules can be added vertically by increasing the model depth as observed in Houlsby Adapters and Pfeiffer Adapters <|cite_start|> (Reference: MAD-X: An Adapter-Based Framework for Multi-Task Cross-Lingual Transfer: The main goal behind state-of-the-art pre-trained multilingual models such as multilingual BERT and XLM-R is enabling and bootstrapping NLP applications in low-resource languages through zero-shot or few-shot cross-lingual transfer. However, due to limited model capacity, their transfer performance is the weakest exactly on such low-resource languages and languages unseen during pre-training. We propose MAD-X, an adapter-based framework that enables high portability and parameter-efficient transfer to arbitrary tasks and languages by learning modular language and task representations. In addition, we introduce a novel invertible adapter architecture and a strong baseline method for adapting a pre-trained multilingual model to a new language. 
MAD-X outperforms the state of the art in cross-lingual transfer across a representative set of typologically diverse languages on named entity recognition and causal commonsense reasoning, and achieves competitive results on question answering. Our code and adapters are available at AdapterHub.ml) <|cite_end|>. Houlsby Adapters insert small bottle-neck layers after both the multi-head attention and feed-forward layer of each transformer layer, and are optimized for NLP tasks on the GLUE benchmark. The Pfeiffer Adapter inserts the bottle-neck layer after only the feed-forward layer and has shown comparable effectiveness to fine-tuning on various NLP tasks. Prompt-based adapter methods such as Prefix-tuning <|cite_start|> (Reference: Prefix-Tuning: Optimizing Continuous Prompts for Generation: Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.) <|cite_end|> prepend continuous task-specific vectors to the input sequence which are optimized as free parameters. Compacter <|cite_start|> (Reference: Parameter-efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks: State-of-the-art parameter-efficient fine-tuning methods rely on introducing adapter modules between the layers of a pretrained language model. However, such modules are trained separately for each task and thus do not enable sharing information across tasks. In this paper, we show that we can learn adapter parameters for all layers and tasks by generating them using shared hypernetworks, which condition on task, adapter position, and layer id in a transformer model. This parameter-efficient multi-task learning framework allows us to achieve the best of both worlds by sharing knowledge across tasks via hypernetworks while enabling the model to adapt to each individual task through task-specific adapters. Experiments on the well-known GLUE benchmark show improved performance in multi-task learning while adding only 0.29% parameters per task. We additionally demonstrate substantial performance improvements in few-shot domain generalization across a variety of tasks. Our code is publicly available in https://github.com/rabeehk/hyperformer.) <|cite_end|> hypothesizes that the model can be optimized by learning transformations of the bottle-neck layer in a low-rank subspace, leading to fewer parameters.
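The bottleneck structure shared by Houlsby and Pfeiffer adapters can be written down in a few lines of PyTorch. The sketch below is a generic illustration (down-projection, non-linearity, up-projection, residual connection, with an arbitrary bottleneck size of 64), not the exact implementation used in the cited papers or in this work:
\begin{verbatim}
# Generic bottleneck adapter: down-project, non-linearity, up-project, residual.
# Houlsby-style tuning places such a block after both the self-attention and the
# feed-forward sub-layer of every transformer layer; Pfeiffer-style tuning only
# after the feed-forward sub-layer. Only the adapter parameters are trained.
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck_size: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.up = nn.Linear(bottleneck_size, hidden_size)
        self.act = nn.GELU()
        # Near-identity initialisation so the frozen model is undisturbed
        # at the start of training.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        return hidden_states + self.up(self.act(self.down(hidden_states)))

adapter = BottleneckAdapter()
x = torch.randn(2, 16, 768)              # (batch, sequence, hidden)
print(adapter(x).shape)                  # torch.Size([2, 16, 768])
\end{verbatim}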
Specification based methods fine-tune only a subset of pretrained model parameters to the task-at-hand while keeping the rest of the model frozen. The fine-tuned model parameters can be only the bias terms as observed in BitFit <|cite_start|> (Reference: “{B: 2010年1月22日,享有世界性声誉的英国《皇家学会会志B辑》以专辑的形式发表了基于中国化石材料的20篇古生物学论文。这是该刊首次出版的古生物学专辑。这些论文代表近年来飞速发展的中国古生物学取得的巨大成就的一部分。) <|cite_end|>, or only cross-attention weights as in the case of Seq2Seq models with X-Attention. Re-parameterization methods transform the pretrained weights into parameter efficient form during training. This is observed in LoRA <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|> which optimises rank decomposition matrices of pretrained layer while keeping the original layer frozen.
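As a simplified illustration of the two families just described (not the official BitFit or LoRA implementations; the rank and scaling values are arbitrary), bias-only tuning can be emulated by toggling \texttt{requires\_grad}, while a LoRA-style layer adds a trainable low-rank update around a frozen linear transformation:
\begin{verbatim}
# Simplified sketches of BitFit-style and LoRA-style tuning (illustrative only).
import torch
import torch.nn as nn

def apply_bitfit(model: nn.Module) -> None:
    """Freeze everything except bias terms (specification-based tuning)."""
    for name, param in model.named_parameters():
        param.requires_grad = name.endswith("bias")

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable rank-r update (B @ A)."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                   # original layer stays frozen
        self.A = nn.Parameter(0.01 * torch.randn(r, base.in_features))
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init:
        self.scaling = alpha / r                      # the update starts at zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768), r=8)
print(layer(torch.randn(4, 768)).shape)   # torch.Size([4, 768])
\end{verbatim}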
Recent studies exploring parameter efficient transfer learning for Information Retrieval show promising results of such techniques for dense retrieval models <|cite_start|> (Reference: Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning: A BERT-based Neural Ranking Model (NRM) can be either a crossencoder or a bi-encoder. Between the two, bi-encoder is highly efficient because all the documents can be pre-processed before the actual query time. In this work, we show two approaches for improving the performance of BERT-based bi-encoders. The first approach is to replace the full fine-tuning step with a lightweight fine-tuning. We examine lightweight fine-tuning methods that are adapter-based, prompt-based, and hybrid of the two. The second approach is to develop semi-Siamese models where queries and documents are handled with a limited amount of difference. The limited difference is realized by learning two lightweight fine-tuning modules, where the main language model of BERT is kept common for both query and document. We provide extensive experiment results for monoBERT, TwinBERT, and ColBERT where three performance metrics are evaluated over Robust04, ClueWeb09b, and MS-MARCO datasets. The results confirm that both lightweight fine-tuning and semi-Siamese are considerably helpful for improving BERT-based bi-encoders. In fact, lightweight fine-tuning is helpful for crossencoder, too) <|cite_end|> <|cite_start|> (Reference: Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval: State-of-the-art neural (re)rankers are notoriously data-hungry which -- given the lack of large-scale training data in languages other than English -- makes them rarely used in multilingual and cross-lingual retrieval settings. Current approaches therefore commonly transfer rankers trained on English data to other languages and cross-lingual setups by means of multilingual encoders: they fine-tune all parameters of pretrained massively multilingual Transformers (MMTs, e.g., multilingual BERT) on English relevance judgments, and then deploy them in the target language(s). In this work, we show that two parameter-efficient approaches to cross-lingual transfer, namely Sparse Fine-Tuning Masks (SFTMs) and Adapters, allow for a more lightweight and more effective zero-shot transfer to multilingual and cross-lingual retrieval tasks. We first train language adapters (or SFTMs) via Masked Language Modelling and then train retrieval (i.e., reranking) adapters (SFTMs) on top, while keeping all other parameters fixed. At inference, this modular design allows us to compose the ranker by applying the (re)ranking adapter (or SFTM) trained with source language data together with the language adapter (or SFTM) of a target language. We carry out a large scale evaluation on the CLEF-2003 and HC4 benchmarks and additionally, as another contribution, extend the former with queries in three new languages: Kyrgyz, Uyghur and Turkish. The proposed parameter-efficient methods outperform standard zero-shot transfer with full MMT fine-tuning, while being more modular and reducing training times. The gains are particularly pronounced for low-resource languages, where our approaches also substantially outperform the competitive machine translation-based rankers.) <|cite_end|> <|cite_start|> (Reference: Scattered or Connected? 
An Optimized Parameter-efficient Tuning Approach for Information Retrieval: Pre-training and fine-tuning have achieved significant advances in the information retrieval (IR). A typical approach is to fine-tune all the parameters of large-scale pre-trained models (PTMs) on downstream tasks. As the model size and the number of tasks increase greatly, such approach becomes less feasible and prohibitively expensive. Recently, a variety of parameter-efficient tuning methods have been proposed in natural language processing (NLP) that only fine-tune a small number of parameters while still attaining strong performance. Yet there has been little effort to explore parameter-efficient tuning for IR. In this work, we first conduct a comprehensive study of existing parameter-efficient tuning methods at both the retrieval and re-ranking stages. Unlike the promising results in NLP, we find that these methods cannot achieve comparable performance to full fine-tuning at both stages when updating less than 1\% of the original model parameters. More importantly, we find that the existing methods are just parameter-efficient, but not learning-efficient as they suffer from unstable training and slow convergence. To analyze the underlying reason, we conduct a theoretical analysis and show that the separation of the inserted trainable modules makes the optimization difficult. To alleviate this issue, we propose to inject additional modules alongside the \acp{PTM} to make the original scattered modules connected. In this way, all the trainable modules can form a pathway to smooth the loss surface and thus help stabilize the training process. Experiments at both retrieval and re-ranking stages show that our method outperforms existing parameter-efficient methods significantly, and achieves comparable or even better performance over full fine-tuning.) <|cite_end|> <|cite_start|> (Reference: Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers: Prompt tuning attempts to update few task-specific parameters in pre-trained models. It has achieved comparable performance to fine-tuning of the full parameter set on both language understanding and generation tasks. In this work, we study the problem of prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. Through an extensive analysis, we show that the strategy can mitigate the two issues -- parameter-inefficiency and weak generalizability -- faced by fine-tuning based retrieval methods. Notably, it can significantly improve the out-of-domain zero-shot generalization of the retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy can help retrieval models achieve better generalization performance than traditional methods in which all parameters are updated. Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 18K query-results pairs in 87 topics, making it the largest topic-specific one to date.) <|cite_end|>. <|cite_start|> (Reference: Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning: A BERT-based Neural Ranking Model (NRM) can be either a crossencoder or a bi-encoder. Between the two, bi-encoder is highly efficient because all the documents can be pre-processed before the actual query time. In this work, we show two approaches for improving the performance of BERT-based bi-encoders. 
The first approach is to replace the full fine-tuning step with a lightweight fine-tuning. We examine lightweight fine-tuning methods that are adapter-based, prompt-based, and hybrid of the two. The second approach is to develop semi-Siamese models where queries and documents are handled with a limited amount of difference. The limited difference is realized by learning two lightweight fine-tuning modules, where the main language model of BERT is kept common for both query and document. We provide extensive experiment results for monoBERT, TwinBERT, and ColBERT where three performance metrics are evaluated over Robust04, ClueWeb09b, and MS-MARCO datasets. The results confirm that both lightweight fine-tuning and semi-Siamese are considerably helpful for improving BERT-based bi-encoders. In fact, lightweight fine-tuning is helpful for crossencoder, too) <|cite_end|> studies parameter efficient prefix-tuning, <|cite_start|> (Reference: Prefix-Tuning: Optimizing Continuous Prompts for Generation: Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training.) <|cite_end|> and LoRA <|cite_start|> (Reference: LoRA: Low-Rank Adaptation of Large Language Models: An important paradigm of natural language processing consists of large-scale pre-training on general domain data and adaptation to particular tasks or domains. As we pre-train larger models, full fine-tuning, which retrains all model parameters, becomes less feasible. Using GPT-3 175B as an example -- deploying independent instances of fine-tuned models, each with 175B parameters, is prohibitively expensive. We propose Low-Rank Adaptation, or LoRA, which freezes the pre-trained model weights and injects trainable rank decomposition matrices into each layer of the Transformer architecture, greatly reducing the number of trainable parameters for downstream tasks. Compared to GPT-3 175B fine-tuned with Adam, LoRA can reduce the number of trainable parameters by 10,000 times and the GPU memory requirement by 3 times. LoRA performs on-par or better than fine-tuning in model quality on RoBERTa, DeBERTa, GPT-2, and GPT-3, despite having fewer trainable parameters, a higher training throughput, and, unlike adapters, no additional inference latency. We also provide an empirical investigation into rank-deficiency in language model adaptation, which sheds light on the efficacy of LoRA. We release a package that facilitates the integration of LoRA with PyTorch models and provide our implementations and model checkpoints for RoBERTa, DeBERTa, and GPT-2 at https://github.com/microsoft/LoRA.) <|cite_end|> on bi-encoder and cross-encoder dense models. 
Additionally, they combine the two methods by sequentially optimizing one method for \emph{m} epochs, freezing it and optimizing the other for \emph{n} epochs. Their studies show that while cross-encoders with LoRA and LoRA+(50\% more parameters compared to LoRA) outperform fine-tuning with TwinBERT <|cite_start|> (Reference: {TwinBERT: Distilling Knowledge to Twin-Structured Compressed BERT Models for Large-Scale Retrieval: Pre-trained language models have achieved great success in a wide variety of natural language processing (NLP) tasks, while the superior performance comes with high demand in computational resources, which hinders the application in low-latency information retrieval (IR) systems. To address the problem, we present TwinBERT model, which has two improvements: 1) represent query and document separately using twin-structured encoders and 2) each encoder is a highly compressed BERT-like model with less than one third of the parameters. The former allows document embeddings to be pre-computed offline and cached in memory, which is different from BERT, where the two input sentences are concatenated and encoded together. The change saves large amount of computation time, however, it is still not sufficient for real-time retrieval considering the complexity of BERT model itself. To further reduce computational cost, a compressed multi-layer transformer encoder is proposed with special training strategies as a substitution of the original complex BERT encoder. Lastly, two versions of TwinBERT are developed to combine the query and keyword embeddings for retrieval and relevance tasks correspondingly. Both of them have met the real-time latency requirement and achieve close or on-par performance to BERT-Base model. The models were trained following the teacher-student framework and evaluated with data from one of the major search engines. Experimental results showed that the inference time was significantly reduced and was for the first time controlled within 20ms on CPUs while at the same time the performance gain from fine-tuned BERT-Base model was mostly retained. Integration of the models in production systems also demonstrated remarkable improvements on relevance metrics with negligible influence on latency. The models were released in 2019 with significant production impacts.) <|cite_end|> and ColBERT <|cite_start|> (Reference: ColBERT: Efficient and Effective Passage Search via Contextualized Late Interaction over BERT: Recent progress in Natural Language Understanding (NLU) is driving fast-paced advances in Information Retrieval (IR), largely owed to fine-tuning deep language models (LMs) for document ranking. While remarkably effective, the ranking models based on these LMs increase computational cost by orders of magnitude over prior approaches, particularly as they must feed each query-document pair through a massive neural network to compute a single relevance score. To tackle this, we present ColBERT, a novel ranking model that adapts deep LMs (in particular, BERT) for efficient retrieval. ColBERT introduces a late interaction architecture that independently encodes the query and the document using BERT and then employs a cheap yet powerful interaction step that models their fine-grained similarity. By delaying and yet retaining this fine-granular interaction, ColBERT can leverage the expressiveness of deep LMs while simultaneously gaining the ability to pre-compute document representations offline, considerably speeding up query processing. 
Beyond reducing the cost of re-ranking the documents retrieved by a traditional model, ColBERT's pruning-friendly interaction mechanism enables leveraging vector-similarity indexes for end-to-end retrieval directly from a large document collection. We extensively evaluate ColBERT using two recent passage search datasets. Results show that ColBERT's effectiveness is competitive with existing BERT-based models (and outperforms every non-BERT baseline), while executing two orders-of-magnitude faster and requiring four orders-of-magnitude fewer FLOPs per query.) <|cite_end|>, parameter-efficient methods \textit{do not outperform fine-tuning} for bi-encoders across all datasets. <|cite_start|> (Reference: Parameter-Efficient Neural Reranking for Cross-Lingual and Multilingual Retrieval: State-of-the-art neural (re)rankers are notoriously data-hungry which -- given the lack of large-scale training data in languages other than English -- makes them rarely used in multilingual and cross-lingual retrieval settings. Current approaches therefore commonly transfer rankers trained on English data to other languages and cross-lingual setups by means of multilingual encoders: they fine-tune all parameters of pretrained massively multilingual Transformers (MMTs, e.g., multilingual BERT) on English relevance judgments, and then deploy them in the target language(s). In this work, we show that two parameter-efficient approaches to cross-lingual transfer, namely Sparse Fine-Tuning Masks (SFTMs) and Adapters, allow for a more lightweight and more effective zero-shot transfer to multilingual and cross-lingual retrieval tasks. We first train language adapters (or SFTMs) via Masked Language Modelling and then train retrieval (i.e., reranking) adapters (SFTMs) on top, while keeping all other parameters fixed. At inference, this modular design allows us to compose the ranker by applying the (re)ranking adapter (or SFTM) trained with source language data together with the language adapter (or SFTM) of a target language. We carry out a large scale evaluation on the CLEF-2003 and HC4 benchmarks and additionally, as another contribution, extend the former with queries in three new languages: Kyrgyz, Uyghur and Turkish. The proposed parameter-efficient methods outperform standard zero-shot transfer with full MMT fine-tuning, while being more modular and reducing training times. The gains are particularly pronounced for low-resource languages, where our approaches also substantially outperform the competitive machine translation-based rankers.) <|cite_end|> uses parameter-efficient techniques such as Sparse Fine-Tuning Masks and Adapters for multilingual and cross-lingual retrieval tasks with rerankers. They train language adapters with Masked Language Modeling (MLM hereafter) task and then task-specific retrieval adapters. This enables the fusion of reranking adapter trained with source language data together with the language adapter of the target language. Concurrent to our work, <|cite_start|> (Reference: Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers: Prompt tuning attempts to update few task-specific parameters in pre-trained models. It has achieved comparable performance to fine-tuning of the full parameter set on both language understanding and generation tasks. In this work, we study the problem of prompt tuning for neural text retrievers. We introduce parameter-efficient prompt tuning for text retrieval across in-domain, cross-domain, and cross-topic settings. 
Through an extensive analysis, we show that the strategy can mitigate the two issues -- parameter-inefficiency and weak generalizability -- faced by fine-tuning based retrieval methods. Notably, it can significantly improve the out-of-domain zero-shot generalization of the retrieval models. By updating only 0.1% of the model parameters, the prompt tuning strategy can help retrieval models achieve better generalization performance than traditional methods in which all parameters are updated. Finally, to facilitate research on retrievers' cross-topic generalizability, we curate and release an academic retrieval dataset with 18K query-results pairs in 87 topics, making it the largest topic-specific one to date.) <|cite_end|> studies parameter-efficient prompt tuning techniques such as Prefix tuning and P-tuning v2, specification based methods such as BitFit and adapter-tuning with Pfeiffer Adapters on late interaction bi-encoder models such as Dense Passage Retrieval <|cite_start|> (Reference: Dense Passage Retrieval for Open-Domain Question Answering: Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks.) <|cite_end|> and ColBERT. They are motivated by cross-domain generalization of dense retrievals and achieve better results with P-tuning compared to fine-tuning on the BEIR benchmark. <|cite_start|> (Reference: Scattered or Connected? An Optimized Parameter-efficient Tuning Approach for Information Retrieval: Pre-training and fine-tuning have achieved significant advances in the information retrieval (IR). A typical approach is to fine-tune all the parameters of large-scale pre-trained models (PTMs) on downstream tasks. As the model size and the number of tasks increase greatly, such approach becomes less feasible and prohibitively expensive. Recently, a variety of parameter-efficient tuning methods have been proposed in natural language processing (NLP) that only fine-tune a small number of parameters while still attaining strong performance. Yet there has been little effort to explore parameter-efficient tuning for IR. In this work, we first conduct a comprehensive study of existing parameter-efficient tuning methods at both the retrieval and re-ranking stages. Unlike the promising results in NLP, we find that these methods cannot achieve comparable performance to full fine-tuning at both stages when updating less than 1\% of the original model parameters. More importantly, we find that the existing methods are just parameter-efficient, but not learning-efficient as they suffer from unstable training and slow convergence. To analyze the underlying reason, we conduct a theoretical analysis and show that the separation of the inserted trainable modules makes the optimization difficult. To alleviate this issue, we propose to inject additional modules alongside the \acp{PTM} to make the original scattered modules connected. 
In this way, all the trainable modules can form a pathway to smooth the loss surface and thus help stabilize the training process. Experiments at both retrieval and re-ranking stages show that our method outperforms existing parameter-efficient methods significantly, and achieves comparable or even better performance over full fine-tuning.) <|cite_end|> studies various
parameter-efficient tuning procedures at both the retrieval and re-ranking stages. They conduct a comprehensive study of parameter-efficient techniques such as BitFit, Prefix-tuning, Adapters, LoRA, and MAM adapters with dense bi-encoders and cross-encoders, with BERT-base as the backbone model. Their parameter-efficient techniques achieve comparable effectiveness to fine-tuning on top-20 retrieval accuracy and marginal gains on top-100 retrieval accuracy.
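Among the techniques surveyed above, prompt-based tuning is perhaps the least obvious to picture. The snippet below is a simplified soft-prompt sketch (real prefix-tuning additionally injects prefix vectors into every attention layer), in which a handful of trainable ``virtual token'' embeddings are prepended to the inputs of a frozen encoder; the checkpoint name and prefix length are illustrative choices:
\begin{verbatim}
# Simplified soft-prompt sketch: trainable "virtual token" embeddings are
# prepended to the inputs of a frozen encoder. (Real prefix-tuning additionally
# injects prefix vectors into every attention layer.)
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
for p in encoder.parameters():
    p.requires_grad = False

num_prefix = 10
prefix = nn.Parameter(0.02 * torch.randn(1, num_prefix, encoder.config.hidden_size))

def encode(text: str) -> torch.Tensor:
    batch = tokenizer(text, return_tensors="pt")
    token_embeds = encoder.embeddings.word_embeddings(batch["input_ids"])
    inputs = torch.cat([prefix, token_embeds], dim=1)        # (1, P + L, H)
    mask = torch.cat([torch.ones(1, num_prefix, dtype=torch.long),
                      batch["attention_mask"]], dim=1)
    out = encoder(inputs_embeds=inputs, attention_mask=mask)
    return out.last_hidden_state[:, num_prefix]              # original [CLS] slot

print(encode("parameter-efficient tuning for retrieval").shape)
\end{verbatim}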
Compared to prior works, our experiments first study the use of adapters for state-of-the-art sparse models such as SPLADE, contrary to previous work that studied dense bi-encoder models\footnote{To the best of our knowledge, the only work involving SPLADE and adapters/freezing layers is <|cite_start|> (Reference: Sparsifying Sparse Representations for Passage Retrieval by Top-$ k $ Masking: Sparse lexical representation learning has demonstrated much progress in improving passage retrieval effectiveness in recent models such as DeepImpact, uniCOIL, and SPLADE. This paper describes a straightforward yet effective approach for sparsifying lexical representations for passage retrieval, building on SPLADE by introducing a top-$k$ masking scheme to control sparsity and a self-learning method to coax masked representations to mimic unmasked representations. A basic implementation of our model is competitive with more sophisticated approaches and achieves a good balance between effectiveness and efficiency. The simplicity of our methods opens the door for future explorations in lexical representation learning for passage retrieval.) <|cite_end|>, which found that freezing the embeddings improves effectiveness.}. Furthermore, our results show improvements compared to the previous studies. We also study the case of using distinct adapters for query and document encoders in a ``bi-adapter'' setting, where the same pretrained backbone model is used by both the query and the document encoder but different adapters are trained for the queries and documents. Secondly, we address another research question ignored by previous work: efficient domain adaptation\footnote{Here we use adaptation to mean further fine-tuning on the target domain.} for neural first stage rankers. We start from a trained neural ranker and study adaptation with adapters on a different domain, such as the ones present in the BEIR benchmark. Finally, we also study parameter sharing between rerankers and first stage rankers using adapters, which to our knowledge has not been studied yet. <|paper_end|>
"<|reference_start|> BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models: Existing neural information retrieval (IR) models have often been studied in homogeneous and narrow settings, which has considerably limited insights into their out-of-distribution (OOD) generalization capabilities. To address this, and to facilitate researchers to broadly evaluate the effectiveness of their models, we introduce Benchmarking-IR (BEIR), a robust and heterogeneous evaluation benchmark for information retrieval. We leverage a careful selection of 18 publicly available datasets from diverse text retrieval tasks and domains and evaluate 10 state-of-the-art retrieval systems including lexical, sparse, dense, late-interaction and re-ranking architectures on the BEIR benchmark. Our results show BM25 is a robust baseline and re-ranking and late-interaction-based models on average achieve the best zero-shot performances, however, at high computational costs. In contrast, dense and sparse-retrieval models are computationally more efficient but often underperform other approaches, highlighting the considerable room for improvement in their generalization capabilities. We hope this framework allows us to better evaluate and understand existing retrieval systems, and contributes to accelerating progress towards better robust and generalizable systems in the future. BEIR is publicly available at https://github.com/UKPLab/beir. <|reference_end|>",
"<|reference_start|> Semi-Siamese Bi-encoder Neural Ranking Model Using Lightweight Fine-Tuning: A BERT-based Neural Ranking Model (NRM) can be either a crossencoder or a bi-encoder. Between the two, bi-encoder is highly efficient because all the documents can be pre-processed before the actual query time. In this work, we show two approaches for improving the performance of BERT-based bi-encoders. The first approach is to replace the full fine-tuning step with a lightweight fine-tuning. We examine lightweight fine-tuning methods that are adapter-based, prompt-based, and hybrid of the two. The second approach is to develop semi-Siamese models where queries and documents are handled with a limited amount of difference. The limited difference is realized by learning two lightweight fine-tuning modules, where the main language model of BERT is kept common for both query and document. We provide extensive experiment results for monoBERT, TwinBERT, and ColBERT where three performance metrics are evaluated over Robust04, ClueWeb09b, and MS-MARCO datasets. The results confirm that both lightweight fine-tuning and semi-Siamese are considerably helpful for improving BERT-based bi-encoders. In fact, lightweight fine-tuning is helpful for crossencoder, too <|reference_end|>",
"<|reference_start|> {TwinBERT: Distilling Knowledge to Twin-Structured Compressed BERT Models for Large-Scale Retrieval: Pre-trained language models have achieved great success in a wide variety of natural language processing (NLP) tasks, while the superior performance comes with high demand in computational resources, which hinders the application in low-latency information retrieval (IR) systems. To address the problem, we present TwinBERT model, which has two improvements: 1) represent query and document separately using twin-structured encoders and 2) each encoder is a highly compressed BERT-like model with less than one third of the parameters. The former allows document embeddings to be pre-computed offline and cached in memory, which is different from BERT, where the two input sentences are concatenated and encoded together. The change saves large amount of computation time, however, it is still not sufficient for real-time retrieval considering the complexity of BERT model itself. To further reduce computational cost, a compressed multi-layer transformer encoder is proposed with special training strategies as a substitution of the original complex BERT encoder. Lastly, two versions of TwinBERT are developed to combine the query and keyword embeddings for retrieval and relevance tasks correspondingly. Both of them have met the real-time latency requirement and achieve close or on-par performance to BERT-Base model. The models were trained following the teacher-student framework and evaluated with data from one of the major search engines. Experimental results showed that the inference time was significantly reduced and was for the first time controlled within 20ms on CPUs while at the same time the performance gain from fine-tuned BERT-Base model was mostly retained. Integration of the models in production systems also demonstrated remarkable improvements on relevance metrics with negligible influence on latency. The models were released in 2019 with significant production impacts. <|reference_end|>",
"<|reference_start|> Dense Passage Retrieval for Open-Domain Question Answering: Open-domain question answering relies on efficient passage retrieval to select candidate contexts, where traditional sparse vector space models, such as TF-IDF or BM25, are the de facto method. In this work, we show that retrieval can be practically implemented using dense representations alone, where embeddings are learned from a small number of questions and passages by a simple dual-encoder framework. When evaluated on a wide range of open-domain QA datasets, our dense retriever outperforms a strong Lucene-BM25 system largely by 9%-19% absolute in terms of top-20 passage retrieval accuracy, and helps our end-to-end QA system establish new state-of-the-art on multiple open-domain QA benchmarks. <|reference_end|>"
] | [
8,
23,
30,
34
] | {"<|multi_cite_1_1|>": "arxiv-361577", "<|multi_cite_1_2|>": "ss-876268", "<|multi_cite_1_4|>": "arxiv-349236", "<|cite_2|>": "arxiv-377468", "<|cite_3|>": "arxiv-411073", "<|cite_4|>": "arxiv-418575", "<|cite_5|>": "arxiv-111232", "<|cite_6|>": "arxiv-335662", "<|cite_7|>": "ss-1355711", "<|cite_8|>": "arxiv-327334", "<|multi_cite_9_2|>": "arxiv-313097", "<|multi_cite_9_3|>": "arxiv-349236", "<|multi_cite_9_4|>": "arxiv-262706", "<|cite_10|>": "arxiv-262706", "<|cite_11|>": "arxiv-258214", "<|cite_12|>": "arxiv-411556", "<|multi_cite_13_1|>": "arxiv-359153", "<|cite_14|>": "arxiv-405321", "<|cite_16|>": "arxiv-262706", "<|cite_17|>": "arxiv-313097", "<|cite_18|>": "arxiv-346780", "<|cite_19|>": "ss-876268", "<|cite_21|>": "arxiv-349236", "<|multi_cite_22_1|>": "arxiv-377468", "<|multi_cite_22_2|>": "arxiv-411073", "<|multi_cite_22_3|>": "arxiv-441366", "<|multi_cite_22_4|>": "arxiv-433905", "<|cite_23|>": "arxiv-377468", "<|cite_24|>": "arxiv-313097", "<|cite_25|>": "arxiv-349236", "<|cite_26|>": "ss-1077477", "<|cite_27|>": "arxiv-261769", "<|cite_28|>": "arxiv-411073", "<|cite_29|>": "arxiv-433905", "<|cite_30|>": "arxiv-258633", "<|cite_31|>": "arxiv-441366", "<|cite_32|>": "ss-726865"} |
2006.15098 | <|paper_start|> Title: The Ramifications of Making Deep Neural Networks Compact
Abstract: The Ramifications of Making Deep Neural Networks Compact: The recent trend in deep neural networks (DNNs) research is to make the networks more compact. The motivation behind designing compact DNNs is to improve energy efficiency: by virtue of having a lower memory footprint, compact DNNs have a lower number of off-chip accesses, which improves energy efficiency. However, we show that making DNNs compact has indirect and subtle implications which are not well understood. Reducing the number of parameters in DNNs increases the number of activations which, in turn, increases the memory footprint. We evaluate several recently-proposed compact DNNs on a Tesla P100 GPU and show that their "activations to parameters ratio" ranges between 1.4 and 32.8. Further, the "memory-footprint to model size ratio" ranges between 15 and 443. This shows that a higher number of activations causes a large memory footprint, which increases on-chip/off-chip data movements. Furthermore, these parameter-reducing techniques reduce the arithmetic intensity, which increases the on-chip/off-chip memory bandwidth requirement. Due to these factors, the energy efficiency of compact DNNs may be significantly reduced, which is against the original motivation for designing compact DNNs.
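As a back-of-the-envelope illustration of the quantities discussed in this abstract (the layer shapes below are made up for exposition and are not the paper's measured numbers), the following snippet compares a standard $3\times3$ convolution with a depthwise-separable one on the same feature map: with these toy shapes, the compact variant has roughly an order of magnitude fewer parameters but a much higher activations-to-parameters ratio and a much lower arithmetic intensity, assuming 4-byte activations and weights:
\begin{verbatim}
# Back-of-the-envelope layer comparison (illustrative shapes, 4-byte values).
BYTES = 4

def conv_stats(c_in, c_out, k, h, w, depthwise_separable=False):
    """Return (parameters, activations written, multiply-accumulates)."""
    out_act = c_out * h * w
    if not depthwise_separable:
        params = k * k * c_in * c_out
        macs = k * k * c_in * out_act
        acts = out_act
    else:
        params = k * k * c_in + c_in * c_out        # depthwise + 1x1 pointwise
        dw_act = c_in * h * w                       # intermediate activation map
        macs = k * k * dw_act + c_in * out_act
        acts = dw_act + out_act
    return params, acts, macs

in_act = 128 * 56 * 56
for name, dws in [("standard 3x3 conv", False), ("depthwise-separable", True)]:
    params, acts, macs = conv_stats(128, 128, 3, 56, 56, depthwise_separable=dws)
    bytes_moved = BYTES * (in_act + acts + params)  # inputs + outputs + weights
    print(f"{name:>20}: params={params:>8,}  activations={acts:>8,}  "
          f"act/param={acts / params:5.1f}  intensity={macs / bytes_moved:6.1f} MAC/byte")
\end{verbatim}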
Introduction
\label{sec:introduction}
Deep neural networks (DNNs) have shown phenomenal results in various domains such as image classification and object detection, etc. <|cite_start|> (Reference: ImageNet classification with deep convolutional neural
networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|> <|cite_start|> (Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.) <|cite_end|> <|cite_start|> (Reference: Going Deeper with Convolutions: We propose a deep convolutional neural network architecture codenamed "Inception", which was responsible for setting the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC 2014). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. This was achieved by a carefully crafted design that allows for increasing the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC 2014 is called GoogLeNet, a 22 layers deep network, the quality of which is assessed in the context of classification and detection.) <|cite_end|>.
After the success of AlexNet <|cite_start|> (Reference: ImageNet classification with deep convolutional neural
networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|>, to improve accuracy, researchers have proposed even deeper <|cite_start|> (Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.) <|cite_end|> <|cite_start|> (Reference: Identity Mappings in Deep Residual Networks: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers) <|cite_end|> and wider <|cite_start|> (Reference: Aggregated Residual Transformations for Deep Neural Networks: We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. 
This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.) <|cite_end|> <|cite_start|> (Reference: Rethinking the Inception Architecture for Computer Vision: Convolutional networks are at the core of most state-of-the-art computer vision solutions for a wide variety of tasks. Since 2014 very deep convolutional networks started to become mainstream, yielding substantial gains in various benchmarks. Although increased model size and computational cost tend to translate to immediate quality gains for most tasks (as long as enough labeled data is provided for training), computational efficiency and low parameter count are still enabling factors for various use cases such as mobile vision and big-data scenarios. Here we explore ways to scale up networks in ways that aim at utilizing the added computation as efficiently as possible by suitably factorized convolutions and aggressive regularization. We benchmark our methods on the ILSVRC 2012 classification challenge validation set demonstrate substantial gains over the state of the art: 21.2% top-1 and 5.6% top-5 error for single frame evaluation using a network with a computational cost of 5 billion multiply-adds per inference and with using less than 25 million parameters. With an ensemble of 4 models and multi-crop evaluation, we report 3.5% top-5 error on the validation set (3.6% error on the test set) and 17.3% top-1 error on the validation set.) <|cite_end|> <|cite_start|> (Reference: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning: Very deep convolutional networks have been central to the largest advances in image recognition performance in recent years. One example is the Inception architecture that has been shown to achieve very good performance at relatively low computational cost. Recently, the introduction of residual connections in conjunction with a more traditional architecture has yielded state-of-the-art performance in the 2015 ILSVRC challenge; its performance was similar to the latest generation Inception-v3 network. This raises the question of whether there are any benefit in combining the Inception architecture with residual connections. Here we give clear empirical evidence that training with residual connections accelerates the training of Inception networks significantly. There is also some evidence of residual Inception networks outperforming similarly expensive Inception networks without residual connections by a thin margin. We also present several new streamlined architectures for both residual and non-residual Inception networks. These variations improve the single-frame recognition performance on the ILSVRC 2012 classification task significantly. We further demonstrate how proper activation scaling stabilizes the training of very wide residual Inception networks. 
With an ensemble of three residual and one Inception-v4, we achieve 3.08 percent top-5 error on the test set of the ImageNet classification (CLS) challenge) <|cite_end|> networks, which are deemed over-parameterized\blfootnote{Support for this work was provided by Science and Engineering Research Board (SERB), India, award number ECR/2017/000622.}. These networks have huge compute, memory and power demands, which hinder their deployment on resource-constrained embedded and mobile devices <|cite_start|> (Reference: A survey of FPGA-based accelerators for convolutional neural networks: ) <|cite_end|>.
To enable the deployment of DNNs on resource-constrained platforms, researchers have proposed two types of heuristics: (1) compressing the existing over-parameterized deeper and wider networks and (2) designing new algorithms which have very few parameters i.e. compact models. For example, Han et al. <|cite_start|> (Reference: Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems with limited hardware resources. To address this limitation, we introduce "deep compression", a three stage pipeline: pruning, trained quantization and Huffman coding, that work together to reduce the storage requirement of neural networks by 35x to 49x without affecting their accuracy. Our method first prunes the network by learning only the important connections. Next, we quantize the weights to enforce weight sharing, finally, we apply Huffman coding. After the first two steps we retrain the network to fine tune the remaining connections and the quantized centroids. Pruning, reduces the number of connections by 9x to 13x; Quantization then reduces the number of bits that represent each connection from 32 to 5. On the ImageNet dataset, our method reduced the storage required by AlexNet by 35x, from 240MB to 6.9MB, without loss of accuracy. Our method reduced the size of VGG-16 by 49x from 552MB to 11.3MB, again with no loss of accuracy. This allows fitting the model into on-chip SRAM cache rather than off-chip DRAM memory. Our compression method also facilitates the use of complex neural networks in mobile applications where application size and download bandwidth are constrained. Benchmarked on CPU, GPU and mobile GPU, compressed network has 3x to 4x layerwise speedup and 3x to 7x better energy efficiency.) <|cite_end|> propose magnitude-based pruning of filter weights and Yang et al. <|cite_start|> (Reference: Designing Energy-Efficient Convolutional Neural Networks using Energy-Aware Pruning: Deep convolutional neural networks (CNNs) are indispensable to state-of-the-art computer vision algorithms. However, they are still rarely deployed on battery-powered mobile devices, such as smartphones and wearable gadgets, where vision algorithms can enable many revolutionary real-world applications. The key limiting factor is the high energy consumption of CNN processing due to its high computational complexity. While there are many previous efforts that try to reduce the CNN model size or amount of computation, we find that they do not necessarily result in lower energy consumption, and therefore do not serve as a good metric for energy cost estimation. To close the gap between CNN design and energy consumption optimization, we propose an energy-aware pruning algorithm for CNNs that directly uses energy consumption estimation of a CNN to guide the pruning process. The energy estimation methodology uses parameters extrapolated from actual hardware measurements that target realistic battery-powered system setups. The proposed layer-by-layer pruning algorithm also prunes more aggressively than previously proposed pruning methods by minimizing the error in output feature maps instead of filter weights. For each layer, the weights are first pruned and then locally fine-tuned with a closed-form least-square solution to quickly restore the accuracy. After all layers are pruned, the entire network is further globally fine-tuned using back-propagation. 
With the proposed pruning method, the energy consumption of AlexNet and GoogLeNet are reduced by 3.7x and 1.6x, respectively, with less than 1% top-5 accuracy loss. Finally, we show that pruning the AlexNet with a reduced number of target classes can greatly decrease the number of weights but the energy reduction is limited. Energy modeling tool and energy-aware pruned models available at http://eyeriss.mit.edu/energy.html) <|cite_end|> propose energy-aware pruning to improve energy efficiency. These pruning methods, however, require exhaustive retraining to achieve the accuracy of the original pre-trained model.
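To make the idea of magnitude-based pruning concrete, the following Python sketch zeroes out a chosen fraction of the smallest-magnitude weights in a filter bank. It only illustrates the core selection criterion; the actual Deep Compression pipeline additionally interleaves pruning with retraining and follows it with quantization and Huffman coding. The tensor shape and sparsity level below are placeholder assumptions.
\begin{verbatim}
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    # Zero out the smallest-magnitude weights; keep the rest unchanged.
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 3, 3, 3))       # a hypothetical conv filter bank
pruned, mask = magnitude_prune(w, sparsity=0.9)
print("kept", int(mask.sum()), "of", w.size, "weights")
\end{verbatim}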
Compact DNNs have the advantage of avoiding retraining overheads and, once trained, they can be directly deployed on resource-constrained devices. The current trend in designing compact DNNs is to reduce the number of parameters and computations by leveraging the error tolerance of DNN application domains. Reducing the number of parameters helps in fitting the network in the limited on-chip memory and avoids expensive off-chip accesses, which makes the DNN energy efficient. The computational cost of a DNN is measured in terms of the number of MAC (multiply-accumulate) operations performed in the conv and FC layers.
Since measuring the memory footprint and total energy consumption is not straightforward, researchers generally use the number of parameters and the number of MACs (respectively) as their proxies <|cite_start|> (Reference: NetAdapt: Platform-Aware Neural Network Adaptation for Mobile Applications: This work proposes an algorithm, called NetAdapt, that automatically adapts a pre-trained deep neural network to a mobile platform given a resource budget. While many existing algorithms simplify networks based on the number of MACs or weights, optimizing those indirect metrics may not necessarily reduce the direct metrics, such as latency and energy consumption. To solve this problem, NetAdapt incorporates direct metrics into its adaptation algorithm. These direct metrics are evaluated using empirical measurements, so that detailed knowledge of the platform and toolchain is not required. NetAdapt automatically and progressively simplifies a pre-trained network until the resource budget is met while maximizing the accuracy. Experiment results show that NetAdapt achieves better accuracy versus latency trade-offs on both mobile CPU and mobile GPU, compared with the state-of-the-art automated network simplification algorithms. For image classification on the ImageNet dataset, NetAdapt achieves up to a 1.7$\times$ speedup in measured inference latency with equal or higher accuracy on MobileNets (V1&V2).) <|cite_end|>. However, this approach has crucial limitations. The total memory footprint is the sum of (1) the size of the weights, (2) the size of the activations, and (3) the gradients corresponding to activations and parameters. Hence, it depends on the number of activations as well as the number of parameters. However, since the number of activations cannot be estimated from the number of parameters, the number of parameters is not a good indicator of the memory footprint. Further, as shown in Figure \ref{fig:MAC}, one MAC operation requires three read operations and one write operation. Hence, the energy consumed in each MAC operation depends on (1) the location of the operands in the memory hierarchy, such as the register file, cache or main memory, which decides the operand fetch energy <|cite_start|> (Reference: DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability: To enable the design of large capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design have been explored. The existing modeling tools, however, cover only few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM) and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g. latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e. 2D v/s 3D) for a given optimization target, etc. 
We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers.) <|cite_end|> and (2) the type of convolution, such as $3\times3$, $1\times1$ or depth-wise separable convolution, which decides the degree of reuse <|cite_start|> (Reference: Not All Ops Are Created Equal!: Efficient and compact neural network models are essential for enabling the deployment on mobile and embedded devices. In this work, we point out that typical design metrics for gauging the efficiency of neural network architectures -- total number of operations and parameters -- are not sufficient. These metrics may not accurately correlate with the actual deployment metrics such as energy and memory footprint. We show that throughput and energy varies by up to 5X across different neural network operation types on an off-the-shelf Arm Cortex-M7 microcontroller. Furthermore, we show that the memory required for activation data also need to be considered, apart from the model parameters, for network architecture exploration studies.) <|cite_end|>. Hence, the number of MACs is not an accurate indicator of the energy consumption of a DNN.
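The following back-of-the-envelope Python sketch illustrates this point for a hypothetical three-layer convolutional stack: the weight memory is fixed, whereas the activation memory grows linearly with the batch size and can dominate the footprint even when the parameter count is small. The layer shapes, batch size and fp32 storage assumption are illustrative only.
\begin{verbatim}
def conv_stats(c_in, c_out, k, h_out, w_out):
    # Per-layer parameter, output-activation and MAC counts for a standard convolution.
    params = c_out * (c_in * k * k + 1)              # weights + bias
    acts = c_out * h_out * w_out                     # output feature-map elements
    macs = c_in * k * k * c_out * h_out * w_out
    return params, acts, macs

# Hypothetical 3-layer stack on a 224x224 RGB input (stride/padding folded into h_out, w_out).
layers = [(3, 64, 3, 224, 224), (64, 64, 3, 112, 112), (64, 128, 3, 56, 56)]
batch, bytes_per_elem = 100, 4                       # fp32 storage assumed

total_params = sum(conv_stats(*l)[0] for l in layers)
total_acts = sum(conv_stats(*l)[1] for l in layers)
weight_mem = total_params * bytes_per_elem           # independent of batch size
act_mem = batch * total_acts * bytes_per_elem        # scales with batch size
print("weights: %.1f MiB, activations (B=%d): %.1f MiB"
      % (weight_mem / 2**20, batch, act_mem / 2**20))
\end{verbatim}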
\begin{figure}[htbp]
\begin{center}
\fbox{\includegraphics[scale=0.5]{Figure/MAC.pdf}}
\caption{Illustration of a MAC operation. }
\label{fig:MAC}
\end{center}
\end{figure}
In this paper, we study state-of-the-art compact DNNs and analyze the unforeseen implications of reducing the number of parameters. For example, 1.0-G-SqNxt-23, a variant of SqueezeNext <|cite_start|> (Reference: SqueezeNext: Hardware-Aware Neural Network Design: One of the main barriers for deploying neural networks on embedded systems has been large memory and power consumption of existing neural networks. In this work, we introduce SqueezeNext, a new family of neural network architectures whose design was guided by considering previous architectures such as SqueezeNet, as well as by simulation results on a neural network accelerator. This new network is able to match AlexNet's accuracy on the ImageNet benchmark with $112\times$ fewer parameters, and one of its deeper variants is able to achieve VGG-19 accuracy with only 4.4 Million parameters, ($31\times$ smaller than VGG-19). SqueezeNext also achieves better top-5 classification accuracy with $1.3\times$ fewer parameters as compared to MobileNet, but avoids using depthwise-separable convolutions that are inefficient on some mobile processor platforms. This wide range of accuracy gives the user the ability to make speed-accuracy tradeoffs, depending on the available resources on the target hardware. Using hardware simulation results for power and inference speed on an embedded system has guided us to design variations of the baseline model that are $2.59\times$/$8.26\times$ faster and $2.25\times$/$7.5\times$ more energy efficient as compared to SqueezeNet/AlexNet without any accuracy degradation.) <|cite_end|> has $112\times$ fewer parameters than AlexNet (Table \ref{tab:Modelattributes}) but a higher memory footprint than AlexNet (Table \ref{tab:ResultsSummary}). With a larger batch size ($B$), this becomes even worse: for $B=100$, 1.0-G-SqNxt-23 consumes $5.7\times$ more memory than AlexNet. Further, 1.0-G-SqNxt-23 has $3.27\times$ fewer MACs than AlexNet (Table \ref{tab:Modelattributes}), but its energy efficiency is $45\times$ lower than that of AlexNet (Table \ref{tab:ResultsSummary}). On digging deeper to understand the sources of this inefficiency, we found that 1.0-G-SqNxt-23 has $8.7\times$ more activations and a $28.5\times$ lower MACs/activation ratio compared to AlexNet. A lower MACs/activation ratio leads to lower arithmetic intensity and makes the DNN bandwidth-bound. This increases on-chip/off-chip memory accesses and results in higher energy consumption.
We summarize our contributions as follows.
\begin{itemize}
\item We analyze DNNs that are representative of the state-of-the-art compact designs. We perform a kernel-level analysis of these compact DNNs to gain insights into how the MACs utilize the compute resources and into the performance bottlenecks of each DNN.
\item We study the implications of making DNNs compact on memory footprint, energy efficiency and throughput. We find that the memory footprint depends not only on the number of parameters but also on the number of activations; in fact, the contribution of activations to the memory footprint is very high.
\item Since measuring arithmetic intensity directly is relatively difficult, we propose using the MACs/parameter and MACs/activation ratios as proxies for it (a short numerical sketch follows this list). A low arithmetic intensity indicates a lower degree of reuse of parameters and activations, which increases the energy required for processing inputs.
\end{itemize}
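As a simple illustration of these proxies, the short Python snippet below computes the MACs/parameter and MACs/activation ratios from the per-model counts reported in Table \ref{tab:Modelattributes}; a high ratio indicates that each fetched weight or activation is reused across many MACs.
\begin{verbatim}
# (MACs, parameters, activations) in millions, taken from the characteristics table.
models = {
    "AlexNet":         (723.0, 60.97, 2.05),
    "SqueezeNet-V1.0": (848.0, 1.25, 12.3),
    "SqueezeNet-V1.1": (349.0, 1.24, 7.2),
}

for name, (macs, params, acts) in models.items():
    # Proxies for weight reuse and activation reuse (arithmetic intensity).
    print("%-16s MACs/param = %8.2f  MACs/act = %7.2f"
          % (name, macs / params, macs / acts))
\end{verbatim}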
\begin{table*} [htbp]
\caption{Characteristics of compact DNNs (Params and Acts refer to parameters and activations of DNNs, respectively). }
\label{tab:Modelattributes}
\centering
\begin{tabular}{ |c| c| c| c| c| c| c| c | }
\hline
\textbf{Model Name} & \textbf{Image size} & \textbf{MACs (M)} & \textbf{ $\#$Params (M)} & \textbf{$\#$Acts (M)} & \textbf{Acts/Params} & \textbf{MACs/Params} & \textbf{MACs/Acts} \\
\hline
AlexNet & $224\times 224$ & 723 & 60.97 & 2.05 & 0.03 &11.86 & 352.65 \\
\hline
SqueezeNet-V1.0 & $224\times 224$ & 848 & 1.25 & 12.3 & 9.84 &678.08 & 68.91 \\
SqueezeNet-V1.1 & $224\times 224$ & 349 & 1.24 & 7.2 & 5.81 &281.57 & 48.49 \\
\hline <|paper_end|> | [
"<|reference_start|> Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision. <|reference_end|>",
"<|reference_start|> Identity Mappings in Deep Residual Networks: Deep residual networks have emerged as a family of extremely deep architectures showing compelling accuracy and nice convergence behaviors. In this paper, we analyze the propagation formulations behind the residual building blocks, which suggest that the forward and backward signals can be directly propagated from one block to any other block, when using identity mappings as the skip connections and after-addition activation. A series of ablation experiments support the importance of these identity mappings. This motivates us to propose a new residual unit, which makes training easier and improves generalization. We report improved results using a 1001-layer ResNet on CIFAR-10 (4.62% error) and CIFAR-100, and a 200-layer ResNet on ImageNet. Code is available at: https://github.com/KaimingHe/resnet-1k-layers <|reference_end|>",
"<|reference_start|> DESTINY: A Comprehensive Tool with 3D and Multi-Level Cell Memory Modeling Capability: To enable the design of large capacity memory structures, novel memory technologies such as non-volatile memory (NVM) and novel fabrication approaches, e.g., 3D stacking and multi-level cell (MLC) design have been explored. The existing modeling tools, however, cover only few memory technologies, technology nodes and fabrication approaches. We present DESTINY, a tool for modeling 2D/3D memories designed using SRAM, resistive RAM (ReRAM), spin transfer torque RAM (STT-RAM), phase change RAM (PCM) and embedded DRAM (eDRAM) and 2D memories designed using spin orbit torque RAM (SOT-RAM), domain wall memory (DWM) and Flash memory. In addition to single-level cell (SLC) designs for all these memories, DESTINY also supports modeling MLC designs for NVMs. We have extensively validated DESTINY against commercial and research prototypes of these memories. DESTINY is very useful for performing design-space exploration across several dimensions, such as optimizing for a target (e.g. latency, area or energy-delay product) for a given memory technology, choosing the suitable memory technology or fabrication method (i.e. 2D v/s 3D) for a given optimization target, etc. We believe that DESTINY will boost studies of next-generation memory architectures used in systems ranging from mobile devices to extreme-scale supercomputers. <|reference_end|>",
"<|reference_start|> Not All Ops Are Created Equal!: Efficient and compact neural network models are essential for enabling the deployment on mobile and embedded devices. In this work, we point out that typical design metrics for gauging the efficiency of neural network architectures -- total number of operations and parameters -- are not sufficient. These metrics may not accurately correlate with the actual deployment metrics such as energy and memory footprint. We show that throughput and energy varies by up to 5X across different neural network operation types on an off-the-shelf Arm Cortex-M7 microcontroller. Furthermore, we show that the memory required for activation data also need to be considered, apart from the model parameters, for network architecture exploration studies. <|reference_end|>"
] | [
1,
5,
13,
14
] | {"<|multi_cite_1_1|>": "ss-690198", "<|multi_cite_1_2|>": "arxiv-65675", "<|multi_cite_1_3|>": "arxiv-66180", "<|cite_2|>": "ss-690198", "<|multi_cite_3_1|>": "arxiv-65675", "<|multi_cite_3_2|>": "arxiv-94064", "<|multi_cite_4_1|>": "arxiv-110304", "<|multi_cite_4_2|>": "arxiv-88377", "<|multi_cite_4_3|>": "arxiv-92765", "<|cite_5|>": "ss-1680303", "<|cite_6|>": "arxiv-84906", "<|cite_7|>": "arxiv-110216", "<|cite_8|>": "arxiv-154322", "<|cite_9|>": "ss-1466160", "<|cite_10|>": "arxiv-145344", "<|cite_11|>": "arxiv-153115"} |
2403.11032 | <|paper_start|> Title: FH-TabNet: Multi-Class Familial Hypercholesterolemia Detection via a Multi-Stage Tabular Deep Learning
Abstract: FH-TabNet: Multi-Class Familial Hypercholesterolemia Detection via a Multi-Stage Tabular Deep Learning: Familial Hypercholesterolemia (FH) is a genetic disorder characterized by elevated levels of Low-Density Lipoprotein (LDL) cholesterol or its associated genes. Early-stage and accurate categorization of FH is of significance allowing for timely interventions to mitigate the risk of life-threatening conditions. Conventional diagnosis approach, however, is complex, costly, and a challenging interpretation task even for experienced clinicians resulting in high underdiagnosis rates. Although there has been a recent surge of interest in using Machine Learning (ML) models for early FH detection, existing solutions only consider a binary classification task solely using classical ML models. Despite its significance, application of Deep Learning (DL) for FH detection is in its infancy, possibly, due to categorical nature of the underlying clinical data. The paper addresses this gap by introducing the FH-TabNet, which is a multi-stage tabular DL network for multi-class (Definite, Probable, Possible, and Unlikely) FH detection. The FH-TabNet initially involves applying a deep tabular data learning architecture (TabNet) for primary categorization into healthy (Possible/Unlikely) and patient (Probable/Definite) classes. Subsequently, independent TabNet classifiers are applied to each subgroup, enabling refined classification. The model's performance is evaluated through 5-fold cross-validation illustrating superior performance in categorizing FH patients, particularly in the challenging low-prevalence subcategories.
Introduction
\label{sec:Introduction}
Familial Hypercholesterolemia (FH) is one of the most prevalent genetic disorders characterized by abnormally high levels of blood cholesterol <|cite_start|> (Reference: Familial Hypercholesterolemia: Pitfalls and Challenges in Diagnosis and Treatment: Familial hypercholesterolemia (FH), a condition, which is characterized by a life-long exposure to markedly elevated low-density lipoprotein (LDL) concentrations from birth, and it still remains underdiagnosed and undertreated, despite the fact that its heterogeneous form represents one of the commonest genetic disorders to date. Indeed, only 10% of all estimated affected individuals have been diagnosed worldwide and for the most of them diagnosis comes too late, when atherosclerotic cardiovascular disease (ASCVD) has already been developed. Undiagnosed and undertreated FH leads to accelerated ASCVD with a high rate of premature deaths. Recently, several novel treatment modalities have been introduced, especially for the management of severe hypercholesterolemia. Nonetheless, a substantial number of FH patients still do not achieve guideline-recommended LDL cholesterol target values. In the present review we will summarize and critically discuss pitfalls and challenges in successful diagnosis and treatment of FH.) <|cite_end|>. Its prevalence is estimated to range from $1$ in $200$ to $1$ in $300$ individuals across various ethnicities <|cite_start|> (Reference: Familial Hypercholesterolemia Prevalence Among Ethnicities-Systematic Review and Meta-Analysis: Background: Heterozygous familial hypercholesterolemia (FH) is a common genetic disorder leading to premature cardiovascular disease and death as a result of lifelong high plasma low-density lipoprotein cholesterol levels, if not treated early in life. The prevalence of FH varies between countries because of founder effects, use of different diagnostic criteria, and screening strategies. However, little is known about differences in FH prevalence according to ethnicity. We aimed to investigate the ethnic distribution of FH in diverse populations and estimate the prevalence of FH according to ethnicity. Methods: We performed a systematic review and meta-analysis, searching PubMed and Web of Science for studies presenting data on the prevalence of heterozygous FH among different ethnicities in non-founder populations. Studies with more than 100 individuals, relevant data on prevalence, ethnicity, and using the Dutch Lipid Clinical Network Criteria, Simon Broome, Making Early Diagnosis Prevents Early Death, genetic screening, or comparable diagnostic criteria were considered eligible for inclusion. Results: Eleven general population studies and two patient studies were included in a systematic review and 11 general population studies in a random-effects meta-analysis. The overall pooled FH prevalence was 0.33% or 1:303 in 1,169,879 individuals (95% confidence interval: 0.26–0:40%; 1:385–1:250). Included studies presented data on six ethnicities: black, Latino, white, Asian, brown, and mixed/other. Pooled prevalence was estimated for each group. The highest prevalence observed was 0.52% or 1:192 among blacks (0.34–0.69%; 1:294–1:145) and 0.48% or 1:208 among browns (0.31–0.74%; 1:323–1:135) while the lowest pooled prevalence was 0.25% or 1:400 among Asians (0.15–0.35; 1:500–1:286). The prevalence was 0.37% or 1:270 among Latino (0.24–0.69%; 1:417–1:145), 0.31% or 1:323 among white (0.24–0.41%; 1:417–1:244), and 0.32% or 1:313 among mixed/other individuals (0.13–0.52%; 1:769–1:192). 
Conclusion: The estimated FH prevalence displays a variation across ethnicity, ranging from 0.25% (1:400) to 0.52% (1:192), with the highest prevalence seen among the black and brown and the lowest among the Asian individuals. The differences observed suggest that targeted screening among subpopulations may increase the identification of cases and thus the opportunity for prevention.) <|cite_end|>. The history of FH traces its roots to the pioneering work of the Norwegian physician, Dr. Carl Müller <|cite_start|> (Reference: Xanthomata, Hypercholesterolemia, Angina Pectoris.: ) <|cite_end|>, who shed light on the association between hypercholesterolemia and tendinous xanthomas, connecting them to cardiovascular disease through the lens of single-gene inheritance. FH disorder is commonly caused by mutations in genes responsible for regulating cholesterol metabolism, such as the Low-Density Lipoprotein (LDL) receptor gene that can be passed down through generations in families. This genetic disorder increases the risk of early-onset cardiovascular diseases, including heart attacks and strokes, due to the excessive buildup of LDL cholesterol in the arteries <|cite_start|> (Reference: The genetics and screening of familial hypercholesterolaemia: ) <|cite_end|> <|cite_start|> (Reference: Faculty Opinions recommendation of Familial hypercholesterolaemia is underdiagnosed and undertreated in the general population: guidance for clinicians to prevent coronary heart disease: consensus statement of the European Atherosclerosis Society.: ) <|cite_end|>.
Early detection of FH is, therefore, not only cost-effective but also crucial for preserving lives. However, only $10$\% of the estimated number of worldwide affected individuals have received a formal FH diagnosis. Out of this population, only $2$\% were identified before the age of $18$ years. For the majority of those affected, FH goes unnoticed until middle age, typically surfacing around the age of $45$ in tandem with the development of cardiovascular disease, which highlights the urgent need for screening/diagnosis techniques at younger ages <|cite_start|> (Reference: Global perspective of familial hypercholesterolaemia: a cross-sectional study from the EAS Familial Hypercholesterolaemia Studies Collaboration (FHSC): Background The European) <|cite_end|> <|cite_start|> (Reference: Clinical Genetic Testing for Familial Hypercholesterolemia: JACC Scientific Expert Panel.: ) <|cite_end|> <|cite_start|> (Reference: Familial Hypercholesterolemia: a Review of the Natural History, Diagnosis, and Management: ) <|cite_end|> <|cite_start|> (Reference: Mapping of familial hypercholesterolemia and dyslipidemias basic management infrastructure in Pakistan: a cross-sectional study: ) <|cite_end|>. Indeed, without early recognition, many patients receive inadequate treatment and miss valuable opportunities for preventing cardiovascular problems, which can not only impact their quality of life but may also shorten their lifespan.
Despite extensive efforts in the medical community, critical challenges (outlined later in Section~\ref{Sec:RWs}) persist for early detection and timely intervention of FH. Leveraging the power of Electronic Medical Records (EMRs) and Artificial Intelligence (AI), we aim to address this gap. In this context, the paper proposes the $\SM$ framework that provides highly accurate early detection results without relying on genetic data.
\noindent
\textbf{Contributions:}
The paper introduces an innovative framework for diagnosing FH disorder in four distinct stages of progression, referred to as the Multi-Class Familial Hypercholesterolemia Detection via a Multi-Stage Tabular Deep Learning Network ($\SM$). The proposed $\SM$ framework is designed to stage individuals with FH into the following four categories: Definite, Probable, Possible, and Unlikely. A major challenge in this context is the low prevalence of certain sub-categories, which renders the use of a single-stage staging model infeasible. To address this challenge, the $\SM$ adopts a multi-stage approach, utilizing binary classification techniques built at different stages based on a tabular learning architecture known as TabNet <|cite_start|> (Reference: TabNet: Attentive Interpretable Tabular Learning: We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.) <|cite_end|>. In the first stage, $\SM$ differentiates between combined Definite \& Probable category and combined Possible \& Unlikely category. In the second stage, two parallel binary classification models are designed to provide a more refined within-category assessment of the FH risk stage. Sequential attention is utilized for adaptive feature selection at each decision step, which in turn enables the underlying TabNet to more efficiently conduct end-to-end learning. In summary, the paper makes the following key contributions:
\begin{itemize}
\item Introducing the intuitively pleasing $\SM$ architecture, developed based on tabular Deep Neural Networks (DNNs). The $\SM$, to the best of our knowledge, is the first DL-based solution for multi-class FH risk categorization, providing accurate predictions for low-prevalence subcategories.
\item Providing accurate FH staging results without relying on genomic data. Incorporation of EMRs and blood markers instead of genomic data sets $\SM$ apart from its counterparts by making it more cost-effective/accessible in healthcare settings with limited resources.
\end{itemize}
Simulation results demonstrate a significant improvement in the reliability of FH risk prediction when comparing $\SM$ with traditional ML models. The $\SM$ achieved notably higher F1-scores, particularly in predicting the low-prevalence subcategory of FH patients. More specifically, through $5$-fold Cross-Validation (CV), it achieved average F1-scores of $79.20$\% for Definite, $87.20$\% for Probable, $98.60$\% for Possible, and $98.20$\% for Unlikely FH patients.
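For readers who wish to prototype the two-stage cascade described above, the Python sketch below wires two levels of TabNet classifiers together using the open-source pytorch-tabnet package. The paper does not prescribe a particular implementation; the label coding (0: Unlikely, 1: Possible, 2: Probable, 3: Definite), hyperparameters and training settings shown here are placeholder assumptions rather than the configuration used for the reported results.
\begin{verbatim}
import numpy as np
from pytorch_tabnet.tab_model import TabNetClassifier

# X: 2-D numpy array of tabular features (EMR fields, lipid panel, ...);
# y: integer labels, assumed coding 0: Unlikely, 1: Possible, 2: Probable, 3: Definite.
def fit_cascade(X, y):
    coarse = TabNetClassifier(n_d=8, n_a=8, n_steps=3, seed=0)
    coarse.fit(X, (y >= 2).astype(int), max_epochs=50, patience=10)  # patient vs healthy

    healthy, patient = np.where(y < 2)[0], np.where(y >= 2)[0]
    fine_healthy = TabNetClassifier(seed=0)       # Unlikely vs Possible
    fine_healthy.fit(X[healthy], y[healthy], max_epochs=50, patience=10)
    fine_patient = TabNetClassifier(seed=0)       # Probable vs Definite
    fine_patient.fit(X[patient], y[patient], max_epochs=50, patience=10)
    return coarse, fine_healthy, fine_patient

def predict_cascade(models, X):
    coarse, fine_healthy, fine_patient = models
    is_patient = coarse.predict(X).astype(bool)
    y_hat = np.empty(len(X), dtype=int)
    if (~is_patient).any():
        y_hat[~is_patient] = fine_healthy.predict(X[~is_patient])
    if is_patient.any():
        y_hat[is_patient] = fine_patient.predict(X[is_patient])
    return y_hat
\end{verbatim}
Such a cascade can then be scored, for example, with stratified 5-fold cross-validation and per-class F1-scores, mirroring the evaluation protocol used in this paper.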
The rest of the paper is organized as follows: Section~\ref{Sec:RWs}, first, provides an overview of the relevant literature within this field. Afterwards, Section~\ref{sec:MM}, presents the data pre-processing phase and introduces TabNet as the foundational component of the $\SM$ framework. Section~\ref{SM} introduces the $\SM$ architecture. Simulation results are presented in Section~\ref{Sec:4}. Finally, Section~\ref{con} concludes the paper.
Related Work
\label{Sec:RWs}
As stated previously, the term ``hypercholesterolemia" was first introduced in the late $1930$s by Carl Müller <|cite_start|> (Reference: Xanthomata, Hypercholesterolemia, Angina Pectoris.: ) <|cite_end|>, who conducted a study of $17$ families in which $68$ of $76$ members showed signs of heart disease. He characterized hypercholesterolemia patients by tuberous xanthomas and signs of angina, and concluded that this disorder is hereditary with an autosomal (a specific gene on a numbered chromosome rather than a sex chromosome) dominant characteristic.
When it comes to FH detection, Dutch Lipid Network Criteria (DLNC), Simon Broome Registrar Criteria, and Make Early Diagnosis and Prevent Early Death (MED-PED) criteria <|cite_start|> (Reference: Familial Hypercholesterolaemia Diagnosis and Management: Familial hypercholesterolaemia is the most common monogenic disorder associated with premature coronary artery disease. Mutations are most frequently found in the LDL receptor gene. Clinical criteria can be used to make the diagnosis; however, genetic testing will confirm the disorder and is very useful for cascade screening. Early identification and adequate treatment can improve prognosis, reducing negative clinical cardiovascular outcomes. Patients with familial hypercholesterolaemia are considered at high cardiovascular risk and the treatment target is LDL cholesterol <2.6 mmol/l or at least a 50 % reduction in LDL cholesterol. Patients require intensive treatment with statins and ezetimibe and/or colesevelam. Recently, proprotein convertase subtilisin/kexin type 9 inhibitors have been approved for the management of familial hypercholesterolaemia on top of statins.) <|cite_end|> represent conventional FH screening methods widely employed in clinical settings for diagnosing FH. However, these established models exhibit several drawbacks affecting their practical application. The DLNC and Simon Broome criteria, incorporate lipid levels, physical examinations, family history, and when accessible, genetic data. In contrast, MED-PED criteria prioritize lipid levels and family history. The subjective nature of family history assessments, coupled with scoring variations and diagnostic threshold complexities, may lead to high levels of inconsistency. Moreover, these conventional models also face challenges related to resource accessibility, cost management, and potential population variability, highlighting the need for more effective and accessible diagnostic approaches.
Consequently, there has been a recent surge of interest in applying Machine Learning (ML) techniques for the detection of FH. ML models have gained considerable attention in the field of medical analysis and disease detection. However, almost all of the recent research works <|cite_start|> (Reference: Precision screening for familial hypercholesterolaemia: a machine learning study applied to electronic health encounter data.: ) <|cite_end|> <|cite_start|> (Reference: Performance and clinical utility of supervised machine-learning approaches in detecting familial hypercholesterolaemia in primary care: ) <|cite_end|> <|cite_start|> (Reference: A Machine Learning Model to Aid Detection of Familial Hypercholesterolemia: ) <|cite_end|> <|cite_start|> (Reference: Finding missed cases of familial hypercholesterolemia in health systems using machine learning: ) <|cite_end|> <|cite_start|> (Reference: Prediction of hypercholesterolemia using machine learning techniques: ) <|cite_end|> have mainly focused on the binary classification of FH using classical, hand-crafted ML solutions. In other words, Deep Learning (DL) through the implementation of DNNs has not fully infiltrated this domain, while traditional ML techniques have been featured in prestigious publications (e.g., Lancet) <|cite_start|> (Reference: Applications of machine learning in familial hypercholesterolemia: Familial hypercholesterolemia (FH) is a common hereditary cholesterol metabolic disease that usually leads to an increase in the level of low-density lipoprotein cholesterol in plasma and an increase in the risk of cardiovascular disease. The lack of disease screening and diagnosis often results in FH patients being unable to receive early intervention and treatment, which may mean early occurrence of cardiovascular disease. Thus, more requirements for FH identification and management have been proposed. Recently, machine learning (ML) has made great progress in the field of medicine, including many innovative applications in cardiovascular medicine. In this review, we discussed how ML can be used for FH screening, diagnosis and risk assessment based on different data sources, such as electronic health records, plasma lipid profiles and corneal radian images. In the future, research aimed at developing ML models with better performance and accuracy will continue to overcome the limitations of ML, provide better prediction, diagnosis and management tools for FH, and ultimately achieve the goal of early diagnosis and treatment of FH.) <|cite_end|>.
For example, Reference <|cite_start|> (Reference: Precision screening for familial hypercholesterolaemia: a machine learning study applied to electronic health encounter data.: ) <|cite_end|> developed the FIND-FH model comprising two sequential layers of Random Forest (RF) models, the first one of which is used for feature selection. Incorporation of two consecutive RF layers enhanced the model's performance and adaptability. Reference <|cite_start|> (Reference: Performance and clinical utility of supervised machine-learning approaches in detecting familial hypercholesterolaemia in primary care: ) <|cite_end|> evaluated the effectiveness of various conventional ML techniques in improving the detection of FH and assessed their clinical applicability within a substantial primary care patient population. Similarly, in <|cite_start|> (Reference: A Machine Learning Model to Aid Detection of Familial Hypercholesterolemia: ) <|cite_end|>, a Logistic Regression (LR) model employing the Least Absolute Shrinkage and Selection Operator (LASSO) technique was utilized to discern predictive factors that effectively distinguished individuals with FH. Likewise, Reference <|cite_start|> (Reference: Finding missed cases of familial hypercholesterolemia in health systems using machine learning: ) <|cite_end|> devised a classifier using Electronic Health Record (EHR) data from Stanford Health Care to identify potential FH patients. The classifier, constructed as an RF model, underwent training on data from confirmed patients and carefully matched non-cases. Most other recent works, such as <|cite_start|> (Reference: Prediction of hypercholesterolemia using machine learning techniques: ) <|cite_end|>, followed a similar approach and focused on the application of different classical ML models (i.e., RFs, Gradient Boosting, Support Vector Machine (SVM), and LR) for the task of FH detection.
Finally, Reference <|cite_start|> (Reference: Machine Learning Methods for Hypercholesterolemia Long-Term Risk Prediction: Cholesterol is a waxy substance found in blood lipids. Its role in the human body is helpful in the process of producing new cells as long as it is at a healthy level. When cholesterol exceeds the permissible limits, it works the opposite, causing serious heart health problems. When a person has high cholesterol (hypercholesterolemia), the blood vessels are blocked by fats, and thus, circulation through the arteries becomes difficult. The heart does not receive the oxygen it needs, and the risk of heart attack increases. Nowadays, machine learning (ML) has gained special interest from physicians, medical centers and healthcare providers due to its key capabilities in health-related issues, such as risk prediction, prognosis, treatment and management of various conditions. In this article, a supervised ML methodology is outlined whose main objective is to create risk prediction tools with high efficiency for hypercholesterolemia occurrence. Specifically, a data understanding analysis is conducted to explore the features association and importance to hypercholesterolemia. These factors are utilized to train and test several ML models to find the most efficient for our purpose. For the evaluation of the ML models, precision, recall, accuracy, F-measure, and AUC metrics have been taken into consideration. The derived results highlighted Soft Voting with Rotation and Random Forest trees as base models, which achieved better performance in comparison to the other models with an AUC of 94.5%, precision of 92%, recall of 91.8%, F-measure of 91.7% and an accuracy equal to 91.75%.) <|cite_end|> focused on long-term risk prediction of FH by applying supervised ML models aimed at developing highly efficient risk prediction tools for FH occurrence. To identify the most effective solution, different conventional ML models such as Naive Bayes, SVM, Decision Tree, and Ensemble Learning were tested following a comprehensive hand-crafted feature analysis step.
\begin{figure*}[t!]
\centering
\includegraphics[scale = .10]{fh.png}
\caption{\footnotesize $\SM$ architecture with its building blocks. BN, Agg, and FC represent Batch Normalization, Aggregation, and Fully Connected, respectively.}
\label{fig:tabnet}
\end{figure*}
In conclusion, while application of ML has been targeted for FH detection, to the best of our knowledge, the focus of recent research works <|cite_start|> (Reference: Precision screening for familial hypercholesterolaemia: a machine learning study applied to electronic health encounter data.: ) <|cite_end|> <|cite_start|> (Reference: Performance and clinical utility of supervised machine-learning approaches in detecting familial hypercholesterolaemia in primary care: ) <|cite_end|> <|cite_start|> (Reference: A Machine Learning Model to Aid Detection of Familial Hypercholesterolemia: ) <|cite_end|> <|cite_start|> (Reference: Finding missed cases of familial hypercholesterolemia in health systems using machine learning: ) <|cite_end|> <|cite_start|> (Reference: Prediction of hypercholesterolemia using machine learning techniques: ) <|cite_end|> <|cite_start|> (Reference: Machine Learning Methods for Hypercholesterolemia Long-Term Risk Prediction: Cholesterol is a waxy substance found in blood lipids. Its role in the human body is helpful in the process of producing new cells as long as it is at a healthy level. When cholesterol exceeds the permissible limits, it works the opposite, causing serious heart health problems. When a person has high cholesterol (hypercholesterolemia), the blood vessels are blocked by fats, and thus, circulation through the arteries becomes difficult. The heart does not receive the oxygen it needs, and the risk of heart attack increases. Nowadays, machine learning (ML) has gained special interest from physicians, medical centers and healthcare providers due to its key capabilities in health-related issues, such as risk prediction, prognosis, treatment and management of various conditions. In this article, a supervised ML methodology is outlined whose main objective is to create risk prediction tools with high efficiency for hypercholesterolemia occurrence. Specifically, a data understanding analysis is conducted to explore the features association and importance to hypercholesterolemia. These factors are utilized to train and test several ML models to find the most efficient for our purpose. For the evaluation of the ML models, precision, recall, accuracy, F-measure, and AUC metrics have been taken into consideration. The derived results highlighted Soft Voting with Rotation and Random Forest trees as base models, which achieved better performance in comparison to the other models with an AUC of 94.5%, precision of 92%, recall of 91.8%, F-measure of 91.7% and an accuracy equal to 91.75%.) <|cite_end|> was restricted to development of conventional ML models considering a binary classification problem. Application of DL models within this context is in its infancy, possibly, due to the small size of available datasets and the categorical nature of the underlying clinical data. The paper aims to address this gap by targeting design of a domain-specific DL multi-class model. <|paper_end|> | [
"<|reference_start|> Faculty Opinions recommendation of Familial hypercholesterolaemia is underdiagnosed and undertreated in the general population: guidance for clinicians to prevent coronary heart disease: consensus statement of the European Atherosclerosis Society.: <|reference_end|>",
"<|reference_start|> Global perspective of familial hypercholesterolaemia: a cross-sectional study from the EAS Familial Hypercholesterolaemia Studies Collaboration (FHSC): Background The European <|reference_end|>",
"<|reference_start|> Clinical Genetic Testing for Familial Hypercholesterolemia: JACC Scientific Expert Panel.: <|reference_end|>",
"<|reference_start|> A Machine Learning Model to Aid Detection of Familial Hypercholesterolemia: <|reference_end|>"
] | [
4,
5,
6,
26
] | {"<|cite_1|>": "ss-1592470", "<|cite_2|>": "ss-1592471", "<|cite_3|>": "ss-1592472", "<|multi_cite_5_1|>": "ss-1592473", "<|multi_cite_5_2|>": "ss-1592474", "<|multi_cite_6_1|>": "ss-1592475", "<|multi_cite_6_2|>": "ss-1592476", "<|multi_cite_6_3|>": "ss-1592477", "<|multi_cite_6_4|>": "ss-1592478", "<|cite_7|>": "arxiv-219587", "<|cite_8|>": "ss-1592472", "<|cite_9|>": "ss-1592479", "<|multi_cite_10_1|>": "ss-1592480", "<|multi_cite_10_2|>": "ss-1592481", "<|multi_cite_10_3|>": "ss-1592482", "<|multi_cite_10_4|>": "ss-1592483", "<|multi_cite_10_5|>": "ss-1592484", "<|cite_11|>": "ss-1592485", "<|cite_12|>": "ss-1592480", "<|cite_13|>": "ss-1592481", "<|cite_14|>": "ss-1592482", "<|cite_15|>": "ss-1592483", "<|cite_16|>": "ss-1592484", "<|cite_17|>": "ss-1592486", "<|multi_cite_18_1|>": "ss-1592480", "<|multi_cite_18_2|>": "ss-1592481", "<|multi_cite_18_3|>": "ss-1592482", "<|multi_cite_18_4|>": "ss-1592483", "<|multi_cite_18_5|>": "ss-1592484", "<|multi_cite_18_6|>": "ss-1592486"} |
2207.04345 | <|paper_start|> Title: Segmentation of Blood Vessels, Optic Disc Localization, Detection of Exudates and Diabetic Retinopathy Diagnosis from Digital Fundus Images
Abstract: Segmentation of Blood Vessels, Optic Disc Localization, Detection of Exudates and Diabetic Retinopathy Diagnosis from Digital Fundus Images: Diabetic Retinopathy (DR) is a complication of long-standing, unchecked diabetes and one of the leading causes of blindness in the world. This paper focuses on improved and robust methods to extract some of the features of DR, viz. Blood Vessels and Exudates. Blood vessels are segmented using multiple morphological and thresholding operations. For the segmentation of exudates, k-means clustering and contour detection on the original images are used. Extensive noise reduction is performed to remove false positives from the vessel segmentation algorithm's results. The localization of Optic Disc using k-means clustering and template matching is also performed. Lastly, this paper presents a Deep Convolutional Neural Network (DCNN) model with 14 Convolutional Layers and 2 Fully Connected Layers, for the automatic, binary diagnosis of DR. The vessel segmentation, optic disc localization and DCNN achieve accuracies of 95.93%, 98.77% and 75.73% respectively. The source code and pre-trained model are available https://github.com/Sohambasu07/DR_2021
Introduction
\subsection{Diabetic Retinopathy}
Diabetic Retinopathy is a direct consequence of prolonged, unchecked diabetes, wherein the retinal blood vessels get damaged and leak fluid into the retina. If left untreated, DR can eventually lead to total blindness. DR can be classified as: Mild, Moderate, Severe and Proliferative Diabetic Retinopathy (PDR). These stages can be identified by the presence and extent of certain features (Fig. \ref{fig: fig1}).
\setlength{\TPHorizModule}{\paperwidth}\setlength{\TPVertModule}{\paperheight}
\TPMargin{5pt}
\newcommand{\copyrightstatement}{
\begin{textblock}{0.57}(0.22,0.86)
\noindent
\scriptsize This is an Author Accepted Manuscript of the following chapter: Soham Basu, Sayantan Mukherjee, Ankit Bhattacharya, Anindya Sen, Segmentation of Blood Vessels, Optic Disc Localization, Detection of Exudates and Diabetic Retinopathy Diagnosis from Digital Fundus Images, published in Proceedings of Research and Applications in Artificial Intelligence, edited by Indrajit Pan, Anirban Mukherjee, Vincenzo Piuri, 2021, Springer reproduced with permission of Springer Nature Singapore Pte Ltd. 2021.
The final authenticated version is available online at: \href{https://dx.doi.org/10.1007/978-981-16-1543-6\_16}{https://dx.doi.org/10.1007/978-981-16-1543-6\_16}
Users may only view, print, copy, download and text- and data-mine the content, for the purposes of academic research.
The content may not be (re-)published verbatim in whole or in part or used for commercial purposes. Users must ensure that the author’s moral rights as well as any third parties’ rights to the content or parts of the content are not compromised.
\end{textblock}
}
\copyrightstatement
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{Figure_1.png}
\caption{Different features in a typical DR affected image}
\label{fig: fig1}
\end{figure}
\subsection{Motivation}
Ophthalmologists identify Diabetic Retinopathy based on certain features, viz. blood vessel area, soft and hard exudates, hemorrhages, cotton wool spots and microaneurysms. Automatic extraction of these features from fundus images helps in the quick and early diagnosis of DR. Proliferative Diabetic Retinopathy is easily identified by studying the abnormal pattern of retinal blood vessels.
\subsection{Proposed Method}
The proposed algorithm utilizes the structure and contrast of the darker blood vessels with respect to the brighter background, and aims to efficiently and accurately segment the vessels from retinal fundus images.
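A minimal OpenCV sketch of this idea is shown below: the green channel is contrast-enhanced, a morphological black-hat transform turns the dark vessels into bright ridges, Otsu thresholding binarizes them, and small connected components are discarded as noise. The kernel size, area threshold and file name are illustrative assumptions, and the paper's full pipeline applies additional morphological and noise-reduction steps.
\begin{verbatim}
import cv2
import numpy as np

def segment_vessels(bgr, kernel_size=15, min_area=150):
    green = bgr[:, :, 1]                              # vessels have best contrast in green
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(green)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    blackhat = cv2.morphologyEx(clahe, cv2.MORPH_BLACKHAT, kernel)  # dark vessels -> bright
    _, binary = cv2.threshold(blackhat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary, connectivity=8)
    mask = np.zeros_like(binary)
    for i in range(1, n):                             # drop small blobs (false positives)
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    return mask

vessels = segment_vessels(cv2.imread("fundus.png"))   # hypothetical input file
\end{verbatim}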
Next, the structural profile of the Optic Disc is used to generate a template and the images are matched with this template to calculate the similarity between the two.
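The snippet below sketches one way to realize such template matching with OpenCV: a synthetic bright circular template stands in for the Optic Disc profile and is matched against the red channel, where the OD is usually brightest. The template construction and radius are assumptions made for illustration; the paper builds its template from the data and combines the matching with k-means clustering.
\begin{verbatim}
import cv2
import numpy as np

def locate_optic_disc(bgr, radius=60):
    red = bgr[:, :, 2].astype(np.float32)             # OD appears brightest in the red channel
    size = 2 * radius + 1
    template = np.zeros((size, size), np.float32)
    cv2.circle(template, (radius, radius), radius, 255, -1)
    template = cv2.GaussianBlur(template, (0, 0), sigmaX=radius / 3.0)
    scores = cv2.matchTemplate(red, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(scores)
    return max_loc[0] + radius, max_loc[1] + radius    # (x, y) of the estimated OD centre

x, y = locate_optic_disc(cv2.imread("fundus.png"))     # hypothetical input file
\end{verbatim}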
The exudate detection method performs k-means clustering on the intensities of the original image and extracts the pixels with the highest intensities.
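A compact version of this step using OpenCV's k-means is sketched below: pixel intensities are grouped into k clusters and the brightest cluster is kept as the exudate candidate mask, from which contours are extracted. The value of k and the input file are placeholder assumptions, and the localized Optic Disc must still be masked out of the result since it is similarly bright.
\begin{verbatim}
import cv2
import numpy as np

def exudate_candidates(bgr, k=4):
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    samples = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
    brightest = int(np.argmax(centers))                # cluster with the highest mean intensity
    mask = (labels.reshape(gray.shape) == brightest).astype(np.uint8) * 255
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return mask, contours

mask, contours = exudate_candidates(cv2.imread("fundus.png"))  # hypothetical input file
\end{verbatim}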
Finally, the proposed DCNN employs 14 convolutional layers to generate feature maps from images and predict the correct labels for the diagnosis of DR.
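A PyTorch sketch of a network with this layer budget is given below. The per-layer widths, pooling schedule and input resolution are assumptions made for illustration and do not reproduce the paper's exact configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class DRNet(nn.Module):
    # 14 convolutional layers followed by 2 fully connected layers (widths are assumptions).
    def __init__(self, in_ch=3, num_classes=2):
        super().__init__()
        widths = [32, 32, 64, 64, 128, 128, 128, 256, 256, 256, 512, 512, 512, 512]
        blocks, prev = [], in_ch
        for i, w in enumerate(widths):
            blocks += [nn.Conv2d(prev, w, 3, padding=1), nn.BatchNorm2d(w), nn.ReLU(inplace=True)]
            if i % 2 == 1:
                blocks.append(nn.MaxPool2d(2))         # downsample after every second conv
            prev = w
        self.features = nn.Sequential(*blocks)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(prev, 256), nn.ReLU(inplace=True),
                                  nn.Linear(256, num_classes))

    def forward(self, x):
        return self.head(self.features(x))

logits = DRNet()(torch.randn(1, 3, 224, 224))          # e.g. a resized fundus image
\end{verbatim}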
Related Work
Wang et al. <|cite_start|> (Reference: Hierarchical retinal blood vessel segmentation based on feature and ensemble learning: ) <|cite_end|> demonstrated the use of two classifiers – Convolutional Neural Network (CNN) and Random Forest (RF), which can automatically learn features from raw images and predict patterns, by combining feature learning and traditional learning. Zhang et al. <|cite_start|> (Reference: Retinal vessel segmentation using multi-scale textons derived from keypoints: ) <|cite_end|> proposed an algorithm which classifies vessel pixels using a texton dictionary. It focused more on the thin vessel regions which increased its sensitivity. However, non-vessel pixels may be recognized as vessel pixels, thereby decreasing accuracy and specificity. Singh and Srivastava <|cite_start|> (Reference: Retinal blood vessels segmentation by using Gumbel probability distribution function based matched filter: ) <|cite_end|> used entropy-based optimal thresholding and length filtering, while Al-Diri et al. <|cite_start|> (Reference: An Active Contour Model for Segmenting and Measuring Retinal Vessels: This paper presents an algorithm for segmenting and measuring retinal vessels, by growing a ldquoRibbon of Twinsrdquo active contour model, which uses two pairs of contours to capture each vessel edge, while maintaining width consistency. The algorithm is initialized using a generalized morphological order filter to identify approximate vessels centerlines. Once the vessel segments are identified the network topology is determined using an implicit neural cost function to resolve junction configurations. The algorithm is robust, and can accurately locate vessel edges under difficult conditions, including noisy blurred edges, closely parallel vessels, light reflex phenomena, and very fine vessels. It yields precise vessel width measurements, with subpixel average width errors. We compare the algorithm with several benchmarks from the literature, demonstrating higher segmentation sensitivity and more accurate width measurement.) <|cite_end|> used active contours.
Abbadi et al. <|cite_start|> (Reference: Automatic Detection of Exudates in Retinal Images: Nowadays, automatic detection of different diseases plays an important role in early and reliable diagnosis, which leads to faster recovery and significant reduction in health care costs. One such ...) <|cite_end|> used the grey levels of the Optic Disc (OD) to approximate its boundary. Abdullah and Fraz <|cite_start|> (Reference: Localization and segmentation of optic disc in retinal images using circular Hough transform and grow-cut algorithm: Automated retinal image analysis has been emerging as an important diagnostic tool for early detection of eye-related diseases such as glaucoma and diabetic retinopathy. In this paper, we have presented a robust methodology for optic disc detection and boundary segmentation, which can be seen as the preliminary step in the development of a computer-assisted diagnostic system for glaucoma in retinal images. The proposed method is based on morphological operations, the circular Hough transform and the grow-cut algorithm. The morphological operators are used to enhance the optic disc and remove the retinal vasculature and other pathologies. The optic disc center is approximated using the circular Hough transform, and the grow-cut algorithm is employed to precisely segment the optic disc boundary. The method is quantitatively evaluated on five publicly available retinal image databases DRIVE, DIARETDB1, CHASE_DB1, DRIONS-DB, Messidor and one local Shifa Hospital Database. The method achieves an optic disc detection success rate of 100% for these databases with the exception of 99.09% and 99.25% for the DRIONS-DB, Messidor, and ONHSD databases, respectively. The optic disc boundary detection achieved an average spatial overlap of 78.6%, 85.12%, 83.23%, 85.1%, 87.93%, 80.1%, and 86.1%, respectively, for these databases. This unique method has shown significant improvement over existing methods in terms of detection and boundary extraction of the optic disc.) <|cite_end|> used grow-cut, Mary et al. <|cite_start|> (Reference: An empirical study on optic disc segmentation using an active contour model: ) <|cite_end|> used active contours and Marin et al. <|cite_start|> (Reference: Obtaining optic disc center and pixel region by automatic thresholding methods on morphologically processed fundus images: ) <|cite_end|> used thresholding on morphologically transformed images. Some use the green channel, the red channel or a combination of both. These algorithms fail due to the poor contrast of the OD or saturation due to overexposure in the red channel. Besides, the shapes and sizes of exudates may often be comparable to that of the OD.
Liu et al. <|cite_start|> (Reference: Automatic image analysis of fundus photograph: This paper describes an automatic retinal image analysis system which can be used for the mass screening of diabetic retinopathy patients. Fundus photographs were scanned and processed sequentially to identify the optic disk and the fovea through a Hough transform, to trace the blood vessels by a Gaussian filter and finally to detect the exudates using dynamic thresholding. An objective diagnosis could be provided based on the results of the analysis.) <|cite_end|> used thresholding and region growing to detect exudates. Long et al. <|cite_start|> (Reference: Automatic Detection of Hard Exudates in Color Retinal Images Using Dynamic Threshold and SVM Classification: Algorithm Development and Evaluation: Diabetic retinopathy (DR) is one of the most common causes of visual impairment. Automatic detection of hard exudates (HE) from retinal photographs is an important step for detection of DR. However, most of existing algorithms for HE detection are complex and inefficient. We have developed and evaluated an automatic retinal image processing algorithm for HE detection using dynamic threshold and fuzzy C-means clustering (FCM) followed by support vector machine (SVM) for classification. The proposed algorithm consisted of four main stages: (i) imaging preprocessing; (ii) localization of optic disc (OD); (iii) determination of candidate HE using dynamic threshold in combination with global threshold based on FCM; and (iv) extraction of eight texture features from the candidate HE region, which were then fed into an SVM classifier for automatic HE classification. The proposed algorithm was trained and cross-validated (10 fold) on a publicly available e-ophtha EX database (47 images) on pixel-level, achieving the overall average sensitivity, PPV, and F-score of 76.5%, 82.7%, and 76.7%. It was tested on another independent DIARETDB1 database (89 images) with the overall average sensitivity, specificity, and accuracy of 97.5%, 97.8%, and 97.7%, respectively. In summary, the satisfactory evaluation results on both retinal imaging databases demonstrated the effectiveness of our proposed algorithm for automatic HE detection, by using dynamic threshold and FCM followed by an SVM for classification.) <|cite_end|> used Dynamic Thresholding and SVM Classification, while Ege et al. <|cite_start|> (Reference: Screening for diabetic retinopathy using computer based image analysis and statistical classification: ) <|cite_end|> used Bayesian, Mahalanobis and nearest neighbor classifiers for the same.
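The thresholding-based exudate detectors discussed above can be sketched as follows. Otsu's global threshold is used here as a simple stand-in for the dynamic-threshold schemes in the cited work, and the CLAHE settings and OD margin factor are illustrative assumptions; the OD location from a detector like the one sketched earlier can be passed in so the bright disc is not reported as a lesion.

```python
import cv2

def exudate_candidates(bgr_fundus, od_center=None, od_radius=0):
    # Exudates appear as bright, well-contrasted lesions in the green channel.
    green = bgr_fundus[:, :, 1]
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    # Global Otsu threshold as a stand-in for dynamic thresholding.
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # The optic disc is also bright, so it is masked out of the candidate map.
    if od_center is not None and od_radius > 0:
        cv2.circle(mask, od_center, int(od_radius * 1.2), 0, thickness=-1)
    return mask
```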
Lam et al. <|cite_start|> (Reference: Automated Detection of Diabetic Retinopathy using Deep Learning: Diabetic retinopathy is a leading cause of blindness among working-age adults. Early detection of this condition is critical for good prognosis. In this paper, we demonstrate the use of convolutional neural networks (CNNs) on color fundus images for the recognition task of diabetic retinopathy staging. Our network models achieved test metric performance comparable to baseline literature results, with validation sensitivity of 95%. We additionally explored multinomial classification models, and demonstrate that errors primarily occur in the misclassification of mild disease as normal due to the CNNs inability to detect subtle disease features. We discovered that preprocessing with contrast limited adaptive histogram equalization and ensuring dataset fidelity by expert verification of class labels improves recognition of subtle features. Transfer learning on pretrained GoogLeNet and AlexNet models from ImageNet improved peak test set accuracies to 74.5%, 68.8%, and 57.2% on 2-ary, 3-ary, and 4-ary classification models, respectively.) <|cite_end|> applied transfer learning, fine-tuning neural networks such as GoogLeNet and AlexNet that were pre-trained on ImageNet. Pratt et al. <|cite_start|> (Reference: Convolutional Neural Networks for Diabetic Retinopathy: ) <|cite_end|> proposed another CNN model, trained on Kaggle’s DR database; however, it could only achieve acceptable results when trained on a high-end GPU. <|paper_end|>
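For the transfer-learning approach described in the last paragraph above, a minimal fine-tuning sketch is shown below. It assumes a recent torchvision (>= 0.13) for the weights enum, uses AlexNet purely as an example backbone, and the five-class output and learning rate are illustrative assumptions rather than the cited configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # e.g., the five DR severity grades used in the Kaggle dataset

def build_dr_classifier():
    # Start from an ImageNet-pretrained backbone and replace only the final layer.
    model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
    return model

model = build_dr_classifier()
criterion = nn.CrossEntropyLoss()
# Fine-tune the new head first; earlier layers can be unfrozen later if needed.
optimizer = torch.optim.Adam(model.classifier[6].parameters(), lr=1e-3)
```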
"<|reference_start|> Hierarchical retinal blood vessel segmentation based on feature and ensemble learning: <|reference_end|>",
"<|reference_start|> Retinal vessel segmentation using multi-scale textons derived from keypoints: <|reference_end|>",
"<|reference_start|> Obtaining optic disc center and pixel region by automatic thresholding methods on morphologically processed fundus images: <|reference_end|>",
"<|reference_start|> Automated Detection of Diabetic Retinopathy using Deep Learning: Diabetic retinopathy is a leading cause of blindness among working-age adults. Early detection of this condition is critical for good prognosis. In this paper, we demonstrate the use of convolutional neural networks (CNNs) on color fundus images for the recognition task of diabetic retinopathy staging. Our network models achieved test metric performance comparable to baseline literature results, with validation sensitivity of 95%. We additionally explored multinomial classification models, and demonstrate that errors primarily occur in the misclassification of mild disease as normal due to the CNNs inability to detect subtle disease features. We discovered that preprocessing with contrast limited adaptive histogram equalization and ensuring dataset fidelity by expert verification of class labels improves recognition of subtle features. Transfer learning on pretrained GoogLeNet and AlexNet models from ImageNet improved peak test set accuracies to 74.5%, 68.8%, and 57.2% on 2-ary, 3-ary, and 4-ary classification models, respectively. <|reference_end|>"
] | [
0,
1,
7,
11
] | {"<|cite_1|>": "ss-1963927", "<|cite_2|>": "ss-766763", "<|cite_3|>": "ss-1553549", "<|cite_4|>": "ss-2106281", "<|cite_5|>": "ss-766764", "<|cite_6|>": "ss-766765", "<|cite_7|>": "ss-766766", "<|cite_8|>": "ss-766767", "<|cite_9|>": "ss-766768", "<|cite_10|>": "ss-766769", "<|cite_11|>": "ss-766770", "<|cite_12|>": "ss-766771", "<|cite_13|>": "ss-1088321"} |
2105.04760 | <|paper_start|> Title: Unpacking the Expressed Consequences of AI Research in Broader Impact Statements
Abstract: Unpacking the Expressed Consequences of AI Research in Broader Impact Statements: The computer science research community and the broader public have become increasingly aware of negative consequences of algorithmic systems. In response, the top-tier Neural Information Processing Systems (NeurIPS) conference for machine learning and artificial intelligence research required that authors include a statement of broader impact to reflect on potential positive and negative consequences of their work. We present the results of a qualitative thematic analysis of a sample of statements written for the 2020 conference. The themes we identify broadly fall into categories related to how consequences are expressed (e.g., valence, specificity, uncertainty), areas of impacts expressed (e.g., bias, the environment, labor, privacy), and researchers' recommendations for mitigating negative consequences in the future. In light of our results, we offer perspectives on how the broader impact statement can be implemented in future iterations to better align with potential goals.
Introduction
Scientists and the broader public have long grappled with the scientist’s role in considering the societal consequences of their work. According to philosopher of science Heather <|cite_start|> (Reference: Science, Policy, and the Value-Free Ideal: The role of science in policymaking has gained unprecedented stature in the United States, raising questions about the place of science and scientific expertise in the democratic process. Some scientists have been given considerable epistemic authority in shaping policy on issues of great moral and cultural significance, and the politicizing of these issues has become highly contentious. Since World War II, most philosophers of science have purported the concept that science should be 'value-free'. In "Science, Policy and the Value-Free Ideal", Heather E. Douglas argues that such an ideal is neither adequate nor desirable for science. She contends that the moral responsibilities of scientists require the consideration of values even at the heart of science. She lobbies for a new ideal in which values serve an essential function throughout scientific inquiry, but where the role values play is constrained at key points, thus protecting the integrity and objectivity of science. In this vein, Douglas outlines a system for the application of values to guide scientists through points of uncertainty fraught with moral valence. Following a philosophical analysis of the historical background of science advising and the value-free ideal, Douglas defines how values should - and should not - function in science. She discusses the distinctive direct and indirect roles for values in reasoning, and outlines seven senses of objectivity, showing how each can be employed to determine the reliability of scientific claims. Douglas then uses these philosophical insights to clarify the distinction between junk science and sound science to be used in policymaking. In conclusion, she calls for greater openness on the values utilized in policymaking, and more public participation in the policymaking process, by suggesting various models for effective use of both the public and experts in key risk assessments.) <|cite_end|>, scientific thinking since the 1960s has tended to embrace the notion of a value-free ideal, limiting the extent to which scientists engage with non-epistemic social, ethical, or political values in the scientific process. Yet, however well-established, this value-free ideal has failed to address the many ways in which such values invariably infiltrate the scientific enterprise, including how a scientist might make changes to their research agenda based on potential societal consequences.
While the idea of values in design and technology is hardly new~(e.g., <|cite_start|> (Reference: {Do Artifacts Have Politics?: In controversies about technology and society, there is no idea more pro vocative than the notion that technical things have political qualities. At issue is the claim that the machines, structures, and systems of modern material culture can be accurately judged not only for their contributions of efficiency and pro ductivity, not merely for their positive and negative environmental side effects, but also for the ways in which they can embody specific forms of power and authority. Since ideas of this kind have a persistent and troubling presence in discussions about the meaning of technology, they deserve explicit attention.1 Writing in Technology and Culture almost two decades ago, Lewis Mumford gave classic statement to one version of the theme, arguing that "from late neo lithic times in the Near East, right down to our own day, two technologies have recurrently existed side by side: one authoritarian, the other democratic, the first system-centered, immensely powerful, but inherently unstable, the other man-centered, relatively weak, but resourceful and durable."2 This thesis stands at the heart of Mumford's studies of the city, architecture, and the his tory of technics, and mirrors concerns voiced earlier in the works of Peter Kropotkin, William Morris, and other nineteenth century critics of industrial ism. More recently, antinuclear and prosolar energy movements in Europe and America have adopted a similar notion as a centerpiece in their arguments. Thus environmentalist Denis Hayes concludes, "The increased deployment of nuclear power facilities must lead society toward authoritarianism. Indeed, safe reliance upon nuclear power as the principal source of energy may be possible only in a totalitarian state." Echoing the views of many proponents of appropri ate technology and the soft energy path, Hayes contends that "dispersed solar sources are more compatible than centralized technologies with social equity, freedom and cultural pluralism."3 An eagerness to interpret technical artifacts in political language is by no means the exclusive property of critics of large-scale high-technology systems. A long lineage of boosters have insisted that the "biggest and best" that science and industry made available were the best guarantees of democracy, freedom, and social justice. The factory system, automobile, telephone, radio, television, the space program, and of course nuclear power itself have all at one time or another been described as democratizing, liberating forces. David Lilienthal, in T.V.A.: Democracy on the March, for example, found this promise in the phos 121) <|cite_end|> <|cite_start|> (Reference: Value-Sensitive Design: In this section, Journalistica puts a spotlight on research methods used in journalism studies and/or journalism practice.) <|cite_end|>), the broader computer science community has recently begun to make more concerted attempts to challenge the value-free ideal. In particular, computer scientists have called for researchers to consider the downstream consequences of their work as part of the peer-review process <|cite_start|> (Reference: It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process: The computing research community needs to work much harder to address the downsides of our innovations. 
Between the erosion of privacy, threats to democracy, and automation's effect on employment (among many other issues), we can no longer simply assume that our research will have a net positive impact on the world. While bending the arc of computing innovation towards societal benefit may at first seem intractable, we believe we can achieve substantial progress with a straightforward step: making a small change to the peer review process. As we explain below, we hypothesize that our recommended change will force computing researchers to more deeply consider the negative impacts of their work. We also expect that this change will incentivize research and policy that alleviates computing's negative impacts.) <|cite_end|>, formally integrating the act of reflecting on both positive and negative societal consequences into the scientific enterprise. Suggestions like this one are timely, as the computer science research community---as well as the broader public---is becoming increasingly aware of the ways in which deployed technologies have disproportionate negative impacts on marginalized communities <|cite_start|> (Reference: {Race After Technology: Abolitionist Tools for the New Jim Code: Benjamin argues that automation, far from being a sinister story of racist programmers scheming on the dark web, has the potential to hide, speed up, and deepen discrimination while appearing neutral and even benevolent when compared to the racism of a previous era. Presenting the concept of the “New Jim Code,” she shows how a range of discriminatory designs encode inequity by explicitly amplifying racial hierarchies; by ignoring but thereby replicating social divisions; or by aiming to fix racial bias but ultimately doing quite the opposite. Moreover, she makes a compelling case for race itself as a kind of technology, designed to stratify and sanctify social injustice in the architecture of everyday life.) <|cite_end|> <|cite_start|> (Reference: Algorithms of {Oppression}: {How} {Search} {Engines} {Reinforce} {Racism}: normatively defend its freedom as an institution of democratic self-governance, it needs to show how its separations and dependencies ensure not only individual rights to speak but collective rights to hear” (p. 184). The struggle to both interrogate and defend this kind of press autonomy is especially important in periods of major social and technological change. Considering this, then, Ananny calls for greater collaboration between journalists, technologists, and even the public in order to construct a better networked press. Ananny also envisions a greater prioritization of science and technology studies (STS) in journalism scholarship that centralizes the fundamental interconnectedness between journalism’s human and nonhuman actors. Overall, this book is critical for journalists, scholars, and citizens, who are interested in the larger project of the free press in the United States. Considering Ananny’s concentration on the public in this work, future research may wish to address, more directly, the perspectives of the citizenry. This could be achieved through surveys, interviews, or analyses of texts, such as op-eds, that can provide deeper insight into how members of the public conceive of networked press freedom and their own right to hear. Such a dialogue between journalists, scholars, and the public is increasingly consequential in an era where trust in journalism as a democratic institution is waning. 
Ultimately, Ananny begins this conversation, foregrounding pivotal questions about the press’s identity in the digital age and offering, as a crucial starting point, new ways to think about press autonomy, and its relationship to democracy.) <|cite_end|> <|cite_start|> (Reference: Automating inequality: How High-tech Tools Profile, Police, and Punish the Poor: Almost two decades into the new millennium, it is unlikely that the use of digital technologies will slow in any significant way, particularly in the public sector. As local and regional public age...) <|cite_end|> and pose significant costs to the environment <|cite_start|> (Reference: Energy and Policy Considerations for Deep Learning in NLP: Recent progress in hardware and methodology for training neural networks has ushered in a new generation of large networks trained on abundant data. These models have obtained notable gains in accuracy across many NLP tasks. However, these accuracy improvements depend on the availability of exceptionally large computational resources that necessitate similarly substantial energy consumption. As a result these models are costly to train and develop, both financially, due to the cost of hardware and electricity or cloud compute time, and environmentally, due to the carbon footprint required to fuel modern tensor processing hardware. In this paper we bring this issue to the attention of NLP researchers by quantifying the approximate financial and environmental costs of training a variety of recently successful neural network models for NLP. Based on these findings, we propose actionable recommendations to reduce costs and improve equity in NLP research and practice.) <|cite_end|> <|cite_start|> (Reference: Green AI: The computations required for deep learning research have been doubling every few months, resulting in an estimated 300,000x increase from 2012 to 2018 [2]. These computations have a surprisingly large carbon footprint [38]. Ironically, deep learning was inspired by the human brain, which is remarkably energy efficient. Moreover, the financial cost of the computations can make it difficult for academics, students, and researchers, in particular those from emerging economies, to engage in deep learning research. This position paper advocates a practical solution by making efficiency an evaluation criterion for research alongside accuracy and related measures. In addition, we propose reporting the financial cost or "price tag" of developing, training, and running models to provide baselines for the investigation of increasingly efficient methods. Our goal is to make AI both greener and more inclusive---enabling any inspired undergraduate with a laptop to write high-quality research papers. Green AI is an emerging focus at the Allen Institute for AI.) <|cite_end|> <|cite_start|> (Reference: On the Dangers of Stochastic Parrots: Can Language Models Be Too
Big?: The past 3 years of work in NLP have been characterized by the development and deployment of ever larger language models, especially for English. BERT, its variants, GPT-2/3, and others, most recently Switch-C, have pushed the boundaries of the possible both through architectural innovations and through sheer size. Using these pretrained models and the methodology of fine-tuning them for specific tasks, researchers have extended the state of the art on a wide array of tasks as measured by leaderboards on specific benchmarks for English. In this paper, we take a step back and ask: How big is too big? What are the possible risks associated with this technology and what paths are available for mitigating those risks? We provide recommendations including weighing the environmental and financial costs first, investing resources into curating and carefully documenting datasets rather than ingesting everything on the web, carrying out pre-development exercises evaluating how the planned approach fits into research and development goals and supports stakeholder values, and encouraging research directions beyond ever larger language models.) <|cite_end|>), and is demanding scrutiny to account for negative consequences <|cite_start|> (Reference: Accountability in Algorithmic Decision Making: Every fiscal quarter automated writing algorithms churn out thousands of corporate earnings articles for the AP (Associated Press) based on little more than structured data. Companies such as Automated Insights, which produces the articles for AP, and Narrative Science can now write straight news articles in almost any domain that has clean and well-structured data: finance, sure, but also sports, weather, and education, among others. The articles aren’t cardboard either; they have variability, tone, and style, and in some cases readers even have difficulty distinguishing the machine-produced articles from human-written ones.) <|cite_end|>.
In 2020, the Conference on Neural Information Processing Systems (NeurIPS), a top-tier conference for machine learning (ML) research, required that authors submit a broader impact statement as part of each paper submission. As per official guidance, the statement was meant to include both positive and negative potential societal consequences. NeurIPS’s broader impact requirement mirrored the call put forth by <|cite_start|> (Reference: It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process: The computing research community needs to work much harder to address the downsides of our innovations. Between the erosion of privacy, threats to democracy, and automation's effect on employment (among many other issues), we can no longer simply assume that our research will have a net positive impact on the world. While bending the arc of computing innovation towards societal benefit may at first seem intractable, we believe we can achieve substantial progress with a straightforward step: making a small change to the peer review process. As we explain below, we hypothesize that our recommended change will force computing researchers to more deeply consider the negative impacts of their work. We also expect that this change will incentivize research and policy that alleviates computing's negative impacts.) <|cite_end|>, and as a result of some ambiguity in its messaging has been termed by and others as an ``experiment,’’ ostensibly in facilitating more active and intentional engagement within computer science around societal consequences of research and technology.
One way to conceive of the act of writing the broader impact statement is as an ``ethical tool,’’ as defined by <|cite_start|> (Reference: The Ethical Tools of Multinationals: ) <|cite_end|>; that is, ``a practical method and/or conceptual framework with the main purpose of helping the user(s) improve their ethical deliberations in order to reach an ethically informed judgement or decision.” Similarly, the broader impact statement is relevant to the ongoing conversation around Algorithmic Impact Assessments (AIAs) <|cite_start|> (Reference: Governing with Algorithmic Impact Assessments: Six Observations: Algorithmic impact assessments (AIA) are increasingly being proposed as a mechanism for algorithmic accountability. These assessments are seen as potentially useful for anticipating, avoiding, and mitigating the negative consequences of algorithmic decision-making systems (ADS). At the same time, what an AIA would entail remains under-specified. While promising, AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of impact assessments structure the possible governance outcomes. Decisions about what type of effects count as an impact, when impacts are assessed, whose interests are considered, who is invited to participate, who conducts the assessment, the public availability of the assessment, and what the outputs of the assessment might be all shape the forms of accountability that AIA proponents seek to encourage. These considerations remain open, and will determine whether and how AIAs can function as a viable governance mechanism in the broader algorithmic accountability toolkit, especially with regard to furthering the public interest. Because AlAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.) <|cite_end|>, which refer to methods of increasing accountability around algorithmic systems. More broadly, such impact statements may contribute to frameworks of Responsible Research and Innovation (RRI) <|cite_start|> (Reference: {Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society: Responsible innovation: Managing the responsible emergence of science and innovation in society, by Richard Owen, John Bessant, Maggy Heintz (Eds). (2013). John Wiley & Sons, LTD. Print ISBN. 9781119966364 Developed from the content of a workshop held at the Residence of the French Ambassador in London in May 2011, Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, is a collection of essays by an international cast of academics, administrators, ethicists, and scientists. Accessible for those interested in the trajectory of novelty in technique and technology, the collection is intended for decision makers and policy movers, individuals who need guidance on the unfolding of long term trends rather than specific, near term outcomes. As such, most of the essays will make it clear that they are making pains to stay away from explicit prescriptions for individual problems, and instead setting out to create management frameworks for leadership invested in innovation processes. Responsible Innovation is a textbook meant to be read early in the innovation process, ideally before ethical issues arise. The essays are very well cited and provide a wealth of information for further research. 
Equipped with the information found within, the text promises the watchdogs of innovative products and processes insight into the question of how innovation can and should be carried out. Responsible Innovation (RI) as a practice gets several definitions over the course of the text, with general conclusions being as follows: that RI is a pluralistic process balancing a continuum of viewpoints, varying education levels, and degrees of political and economic power; that RI has to balance anticipation of the future with the fact that technology is by definition unanticipated; that the current market-based paradigm that dominates the world economic and political systems means that when (and if) RI appears, it arises out of an organized chaos of competition and marketeering; and finally that RI is a collective commitment to the future. The various characterizations of RI proceed from a growing body of scholarship concerned with diligent stewardship of the research process, from academia to business. A collection like Responsible Innovation could easily fall into length philosophical and ethical reflection and polemics, or alternatively, stale repetition of various practical approaches already attempted. It is fortunate, then, that the editors chose such a mix of essays as they did. Philosophical perspectives are erudite, such as the criticism of consequentialism offered in Chapter 7, Understanding the Ethnical Issues (Grinbaum, Groves 2013), and the unpacking of Hannah Arendt's distinction between collective responsibility and collective guilt as an answer to the dominant consequentialist paradigm. The author's referencing of Hans Jonas' idea of technology's influence in "our power over future generations" (127) identifies one of the primary ethical issues at hand in responsible innovation: the depth of the stakes involved in the tireless march forward that constitutes innovation, stakes that have become extraordinarily high. The various global perspectives that inform the collection are helpful in this regard. For instance, a European perspective is offered in Chapter 3's A Vision of Responsible Research and Innovation (Schomberg 2013), which takes on the EU's Lund Declaration--the final word of the Lund Conference in 2009, focused on the "great challenges of our time"--as a starting point for developing 'normative anchor points' for innovative product and process: that product be ethically acceptable, be developed in a sustainable manner, and be socially desirable; and that process be responsive, adaptive, and have integrated management. Relatively new disciplines, such as geo-engineering and nanotechnology, fields of study that promise transformative innovations and coeval ethical implications, are approached with maturity and nuance. …) <|cite_end|> <|cite_start|> (Reference: Responsible research and innovation: From science in society to science for society, with society: The term responsible (research and) innovation has gained increasing EU policy relevance in the last two years, in particular within the European Commission's Science in Society programme, in the context of the Horizon 2020 Strategy. We provide a brief historical overview of the concept, and identify three distinct features that are emerging from associated discourses. The first is an emphasis on the democratic governance of the purposes of research and innovation and their orientation towards the 'right impacts'. 
The second is responsiveness, emphasising the integration and institutionalisation of established approaches of anticipation, reflection and deliberation in and around research and innovation, influencing the direction of these and associated policy. The third concerns the framing of responsibility itself in the context of research and innovation as collective activities with uncertain and unpredictable consequences. Finally, we reflect on possible motivations for responsible innovation itself. Copyright The Author 2012. Published by Oxford University Press. All rights reserved. For Permissions, please email: [email protected], Oxford University Press.) <|cite_end|> which help govern the R\&D process in ways that are responsive to ethical and societal concerns.
While the intended outcomes of the broader impact statement, as envisioned by the NeurIPS conference organizers, are ambiguous, the <|cite_start|> (Reference: It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process: The computing research community needs to work much harder to address the downsides of our innovations. Between the erosion of privacy, threats to democracy, and automation's effect on employment (among many other issues), we can no longer simply assume that our research will have a net positive impact on the world. While bending the arc of computing innovation towards societal benefit may at first seem intractable, we believe we can achieve substantial progress with a straightforward step: making a small change to the peer review process. As we explain below, we hypothesize that our recommended change will force computing researchers to more deeply consider the negative impacts of their work. We also expect that this change will incentivize research and policy that alleviates computing's negative impacts.) <|cite_end|> proposal suggests goals such as increased transparency of impacts for the community, and encouragement of reflection and research on ways to mitigate negative impacts. In this paper we examine the content of NeurIPS broader impact statements in light of these goals. Do broader impact statements capture a wide array of positive and negative consequences? Is there evidence that authors are considering ways to mitigate negative impacts? We present a qualitative thematic analysis of hundreds of NeurIPS 2020 broader impact statements, characterizing the impacts---both positive and negative---and recommendations for mitigating negative consequences that researchers discussed in their statements. Our underlying goal is to gain insight into what and how researchers elaborated in their broader impact statements, with an eye toward how our results may inform similar future initiatives.
Related Work
The broader impact statements we study operate as a governance tool within the peer review process. Thus, we first briefly situate our work within ideas of responsible research before connecting to literature on values in science and technology.
\subsection{Responsible Research}
RRI can be understood as a process in which stakeholders ``become mutually responsible to each other and anticipate research and innovation outcomes" <|cite_start|> (Reference: {Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society: Responsible innovation: Managing the responsible emergence of science and innovation in society, by Richard Owen, John Bessant, Maggy Heintz (Eds). (2013). John Wiley & Sons, LTD. Print ISBN. 9781119966364 Developed from the content of a workshop held at the Residence of the French Ambassador in London in May 2011, Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society, is a collection of essays by an international cast of academics, administrators, ethicists, and scientists. Accessible for those interested in the trajectory of novelty in technique and technology, the collection is intended for decision makers and policy movers, individuals who need guidance on the unfolding of long term trends rather than specific, near term outcomes. As such, most of the essays will make it clear that they are making pains to stay away from explicit prescriptions for individual problems, and instead setting out to create management frameworks for leadership invested in innovation processes. Responsible Innovation is a textbook meant to be read early in the innovation process, ideally before ethical issues arise. The essays are very well cited and provide a wealth of information for further research. Equipped with the information found within, the text promises the watchdogs of innovative products and processes insight into the question of how innovation can and should be carried out. Responsible Innovation (RI) as a practice gets several definitions over the course of the text, with general conclusions being as follows: that RI is a pluralistic process balancing a continuum of viewpoints, varying education levels, and degrees of political and economic power; that RI has to balance anticipation of the future with the fact that technology is by definition unanticipated; that the current market-based paradigm that dominates the world economic and political systems means that when (and if) RI appears, it arises out of an organized chaos of competition and marketeering; and finally that RI is a collective commitment to the future. The various characterizations of RI proceed from a growing body of scholarship concerned with diligent stewardship of the research process, from academia to business. A collection like Responsible Innovation could easily fall into length philosophical and ethical reflection and polemics, or alternatively, stale repetition of various practical approaches already attempted. It is fortunate, then, that the editors chose such a mix of essays as they did. Philosophical perspectives are erudite, such as the criticism of consequentialism offered in Chapter 7, Understanding the Ethnical Issues (Grinbaum, Groves 2013), and the unpacking of Hannah Arendt's distinction between collective responsibility and collective guilt as an answer to the dominant consequentialist paradigm. The author's referencing of Hans Jonas' idea of technology's influence in "our power over future generations" (127) identifies one of the primary ethical issues at hand in responsible innovation: the depth of the stakes involved in the tireless march forward that constitutes innovation, stakes that have become extraordinarily high. 
The various global perspectives that inform the collection are helpful in this regard. For instance, a European perspective is offered in Chapter 3's A Vision of Responsible Research and Innovation (Schomberg 2013), which takes on the EU's Lund Declaration--the final word of the Lund Conference in 2009, focused on the "great challenges of our time"--as a starting point for developing 'normative anchor points' for innovative product and process: that product be ethically acceptable, be developed in a sustainable manner, and be socially desirable; and that process be responsive, adaptive, and have integrated management. Relatively new disciplines, such as geo-engineering and nanotechnology, fields of study that promise transformative innovations and coeval ethical implications, are approached with maturity and nuance. …) <|cite_end|>. This in turn relates to models of anticipatory governance in which emerging technologies are steered in order to be adapted to societal needs and ethical considerations <|cite_start|> (Reference: Understanding ‘anticipatory governance’: Anticipatory governance is ‘a broad-based capacity extended through society that can act on a variety of inputs to manage emerging knowledge-based technologies while such management is still possible’. It motivates activities designed to build capacities in foresight, engagement, and integration – as well as through their production ensemble. These capacities encourage and support the reflection of scientists, engineers, policy makers, and other publics on their roles in new technologies. This article reviews the early history of the National Nanotechnology Initiative in the United States, and it further explicates anticipatory governance through exploring the genealogy of the term and addressing a set of critiques found in the literature. These critiques involve skepticism of three proximities of anticipatory governance: to its object, nanotechnology, which is a relatively indistinct one; to the public, which remains almost utterly naïve toward nanotechnology; and to technoscience itself, which allegedly renders anticipatory governance complicit in its hubris. The article concludes that the changing venues and the amplification within them of the still, small voices of folks previously excluded from offering constructive visions of futures afforded by anticipatory governance may not be complete solutions to our woes in governing technology, but they certainly can contribute to bending the long arc of technoscience more toward humane ends.) <|cite_end|>. In general, there are a range of approaches to RRI and anticipatory governance including scenario-based methods which can help systematically explore potential outcomes, and participatory methods which encourage the inclusion of societal stakeholders and feedback into the research process <|cite_start|> (Reference: Ethics of emerging technology: This chapter surveys ethical approaches to emerging technology. In recent years, emerging technologies have become a major topic of study in the ethics of technology, which has increasingly focused its attention on early-stage intervention in technology development. A number of specific approaches and methods have now been developed for the field, which in many ways is still in its infancy. 
The main problem for the ethics of emerging technology is the problem of uncertainty (Sollie, 2007): how to deal with the uncertainty of future products, uses and consequences, and associated ethical issues that will result from an emerging technology. Several approaches to the ethics of emerging technology will be reviewed that deal with this problem in different ways. Special attention will be paid to anticipatory approaches, which combine foresight analysis with ethical analysis. These approaches will be assessed and critically compared to alternative ethical approaches to emerging technology.) <|cite_end|>. In this work we focus on impact assessments as a mode of anticipatory governance that allows for evaluation and engagement with the ethical implications of a given technology. For example, <|cite_start|> (Reference: A framework for the ethical impact assessment of information technology: ) <|cite_end|> introduces a framework for ethical impact assessments which includes five guiding ethical principles/dimensions (e.g., Respect for Autonomy, Nonmaleficence, etc., which relate to earlier work by <|cite_start|> (Reference: Principles of {{Biomedical Ethics: In this presentation, I will discuss the principles of biomedical and Islamic medical ethics and an interfaith perspective on end-of-life issues. I will also discuss three cases to exemplify some of the conflicts in ethical decision-making.) <|cite_end|>) as well as an accompanying set of ethical tools and procedures to consider a technology as it relates to the guiding dimensions.
AIAs, as previously mentioned, are impact assessments specifically within an algorithmic context. <|cite_start|> (Reference: Governing with Algorithmic Impact Assessments: Six Observations: Algorithmic impact assessments (AIA) are increasingly being proposed as a mechanism for algorithmic accountability. These assessments are seen as potentially useful for anticipating, avoiding, and mitigating the negative consequences of algorithmic decision-making systems (ADS). At the same time, what an AIA would entail remains under-specified. While promising, AIAs raise as many questions as they answer. Choices about the methods, scope, and purpose of impact assessments structure the possible governance outcomes. Decisions about what type of effects count as an impact, when impacts are assessed, whose interests are considered, who is invited to participate, who conducts the assessment, the public availability of the assessment, and what the outputs of the assessment might be all shape the forms of accountability that AIA proponents seek to encourage. These considerations remain open, and will determine whether and how AIAs can function as a viable governance mechanism in the broader algorithmic accountability toolkit, especially with regard to furthering the public interest. Because AlAs are still an incipient governance strategy, approaching them as social constructions that do not require a single or universal approach offers a chance to produce interventions that emerge from careful deliberation.) <|cite_end|> outline several remaining questions about the details of how AIAs work in practice. It is possible to imagine broader impact statements functioning as an ethical tool within a larger AIA framework. However, it is also important to note that the broader impact statement alone, as defined by NeurIPS, does not explicitly require wider engagement with the public or communities that are likely to experience harms that result from a certain technology's deployment or use. <|cite_start|> (Reference: Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts: Algorithmic impact assessments (AIAs) are an emergent form of accountability for organizations that build and deploy automated decision-support systems. They are modeled after impact assessments in other domains. Our study of the history of impact assessments shows that "impacts" are an evaluative construct that enable actors to identify and ameliorate harms experienced because of a policy decision or system. Every domain has different expectations and norms around what constitutes impacts and harms, how potential harms are rendered as impacts of a particular undertaking, who is responsible for conducting such assessments, and who has the authority to act on them to demand changes to that undertaking. By examining proposals for AIAs in relation to other domains, we find that there is a distinct risk of constructing algorithmic impacts as organizationally understandable metrics that are nonetheless inappropriately distant from the harms experienced by people, and which fall short of building the relationships required for effective accountability. 
As impact assessments become a commonplace process for evaluating harms, the FAccT community, in its efforts to address this challenge, should A) understand impacts as objects that are co-constructed accountability relationships, B) attempt to construct impacts as close as possible to actual harms, and C) recognize that accountability governance requires the input of various types of expertise and affected communities. We conclude with lessons for assembling cross-expertise consensus for the co-construction of impacts and building robust accountability relationships.) <|cite_end|> point out that this could result in abstract discussions of impacts that do not reflect impacts that are likely to be realized, suggesting that the statement may work better as one potential approach among other ethical tools used to surface relevant ethical issues around a given technology.
\subsection{Values in Technology}
In addition to philosophical inquiry into the role of values in science <|cite_start|> (Reference: Science, Policy, and the Value-Free Ideal: The role of science in policymaking has gained unprecedented stature in the United States, raising questions about the place of science and scientific expertise in the democratic process. Some scientists have been given considerable epistemic authority in shaping policy on issues of great moral and cultural significance, and the politicizing of these issues has become highly contentious. Since World War II, most philosophers of science have purported the concept that science should be 'value-free'. In "Science, Policy and the Value-Free Ideal", Heather E. Douglas argues that such an ideal is neither adequate nor desirable for science. She contends that the moral responsibilities of scientists require the consideration of values even at the heart of science. She lobbies for a new ideal in which values serve an essential function throughout scientific inquiry, but where the role values play is constrained at key points, thus protecting the integrity and objectivity of science. In this vein, Douglas outlines a system for the application of values to guide scientists through points of uncertainty fraught with moral valence. Following a philosophical analysis of the historical background of science advising and the value-free ideal, Douglas defines how values should - and should not - function in science. She discusses the distinctive direct and indirect roles for values in reasoning, and outlines seven senses of objectivity, showing how each can be employed to determine the reliability of scientific claims. Douglas then uses these philosophical insights to clarify the distinction between junk science and sound science to be used in policymaking. In conclusion, she calls for greater openness on the values utilized in policymaking, and more public participation in the policymaking process, by suggesting various models for effective use of both the public and experts in key risk assessments.) <|cite_end|> <|cite_start|> (Reference: Matthew J. Brown. Science and Moral Imagination. A New Ideal for Values in Science. Pittsburgh. University of Pittsburgh Press, 2020, 288 pp.: Reseña de Matthew J. Brown. Science and Moral Imagination. A New Ideal for Values in Science. Pittsburgh. University of Pittsburgh Press, 2020, 288 pp.) <|cite_end|>, there is a substantial body of work that lays the foundation for discussing the values embedded in technology. This includes work such as <|cite_start|> (Reference: {Do Artifacts Have Politics?: In controversies about technology and society, there is no idea more pro vocative than the notion that technical things have political qualities. At issue is the claim that the machines, structures, and systems of modern material culture can be accurately judged not only for their contributions of efficiency and pro ductivity, not merely for their positive and negative environmental side effects, but also for the ways in which they can embody specific forms of power and authority. 
Since ideas of this kind have a persistent and troubling presence in discussions about the meaning of technology, they deserve explicit attention.1 Writing in Technology and Culture almost two decades ago, Lewis Mumford gave classic statement to one version of the theme, arguing that "from late neo lithic times in the Near East, right down to our own day, two technologies have recurrently existed side by side: one authoritarian, the other democratic, the first system-centered, immensely powerful, but inherently unstable, the other man-centered, relatively weak, but resourceful and durable."2 This thesis stands at the heart of Mumford's studies of the city, architecture, and the his tory of technics, and mirrors concerns voiced earlier in the works of Peter Kropotkin, William Morris, and other nineteenth century critics of industrial ism. More recently, antinuclear and prosolar energy movements in Europe and America have adopted a similar notion as a centerpiece in their arguments. Thus environmentalist Denis Hayes concludes, "The increased deployment of nuclear power facilities must lead society toward authoritarianism. Indeed, safe reliance upon nuclear power as the principal source of energy may be possible only in a totalitarian state." Echoing the views of many proponents of appropri ate technology and the soft energy path, Hayes contends that "dispersed solar sources are more compatible than centralized technologies with social equity, freedom and cultural pluralism."3 An eagerness to interpret technical artifacts in political language is by no means the exclusive property of critics of large-scale high-technology systems. A long lineage of boosters have insisted that the "biggest and best" that science and industry made available were the best guarantees of democracy, freedom, and social justice. The factory system, automobile, telephone, radio, television, the space program, and of course nuclear power itself have all at one time or another been described as democratizing, liberating forces. David Lilienthal, in T.V.A.: Democracy on the March, for example, found this promise in the phos 121) <|cite_end|>'s re-envisioning of common artifacts as political ones, work by <|cite_start|> (Reference: Value-Sensitive Design: In this section, Journalistica puts a spotlight on research methods used in journalism studies and/or journalism practice.) <|cite_end|> and others on value-sensitive design, an approach to design that is closely intertwined with a consideration of values that may be built in to a technology, and <|cite_start|> (Reference: The Californian Ideology: A the end of the twentieth century, the long predicted convergence of the media, computing, and telecommunications into hypermedia is finally happening. Once again, capitalism's relentless drive to diversify and intensify the creative powers of human labour is on the verge of qualitatively transforming the way in which we work, play, and live together. By integrating different technologies around common protocols, something is being created which is more than the sum of its parts. When the ability to produce and receive unlimited amounts of information in any form is combined with the reach of the global telephone networks, existing forms of work and leisure can be fundamentally transformed. New industries will be born and current stock market favourites will swept away. At such moments of profound social change, anyone who can offer a simple explanation of what is happening will be listened to with great interest. 
At this crucial juncture, a loose alliance of writers, hackers, capitalists, and artists from the West Coast of the United States have succeeded in defining a heterogeneous orthodoxy for the coming information age: the Californian Ideology. This new faith has emerged from a bizarre fusion of the cultural bohemianism of San Francisco with the hi-tech industries of) <|cite_end|>'s writing on the combination of values of the New Left and the New Right into a particular kind of ``Californian Ideology'' manifested in Silicon Valley's technological innovations.
There is also a growing body of work that specifically examines values within the computer science community, including how those values are ultimately reflected in new technologies. <|cite_start|> (Reference: The Moral Character of Cryptographic Work.: Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently political tool, and it confers on the field an intrinsically moral dimension. The Snowden revelations motivate a reassessment of the political and moral positioning of cryptography. They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance. I plead for a reinvention of our disciplinary culture to attend not only to puzzles and math, but, also, to the societal implications of our work.) <|cite_end|> describes values that are implicit specifically within cryptographic work, and how these values may have shifted since cryptography's origins. <|cite_start|> (Reference: Against Scale: Provocations and Resistances to Scale Thinking: At the heart of what drives the bulk of innovation and activity in Silicon Valley and elsewhere is scalability. This unwavering commitment to scalability -- to identify strategies for efficient growth -- is at the heart of what we refer to as "scale thinking." Whether people are aware of it or not, scale thinking is all-encompassing. It is not just an attribute of one's product, service, or company, but frames how one thinks about the world (what constitutes it and how it can be observed and measured), its problems (what is a problem worth solving versus not), and the possible technological fixes for those problems. This paper examines different facets of scale thinking and its implication on how we view technology and collaborative work. We argue that technological solutions grounded in scale thinking are unlikely to be as liberatory or effective at deep, systemic change as their purveyors imagine. Rather, solutions which resist scale thinking are necessary to undo the social structures which lie at the heart of social inequality. We draw on recent work on mutual aid networks and propose questions to ask of collaborative work systems as a means to evaluate technological solutions and guide designers in identifying sites of resistance to scale thinking.) <|cite_end|> question the value of scalability in computer science, while provide a study of prominent values found in the ML literature. In terms of ethical deliberation around technology, <|cite_start|> (Reference: Value Cards: An Educational Toolkit for Teaching Social Impacts of Machine Learning through Deliberation: Recently, there have been increasing calls for computer science curricula to complement existing technical training with topics related to Fairness, Accountability, Transparency, and Ethics. In this paper, we present Value Card, an educational toolkit to inform students and practitioners of the social impacts of different machine learning models via deliberation. This paper presents an early use of our approach in a college-level computer science course. Through an in-class activity, we report empirical data for the initial effectiveness of our approach. 
Our results suggest that the use of the Value Cards toolkit can improve students' understanding of both the technical definitions and trade-offs of performance metrics and apply them in real-world contexts, help them recognize the significance of considering diverse social values in the development of deployment of algorithmic systems, and enable them to communicate, negotiate and synthesize the perspectives of diverse stakeholders. Our study also demonstrates a number of caveats we need to consider when using the different variants of the Value Cards toolkit. Finally, we discuss the challenges as well as future applications of our approach.) <|cite_end|> propose ``Value Cards,'' a toolkit intended to facilitate deliberation around technology and societal values. Within the specific domain of artificial intelligence (AI), recent attention has been given to value alignment challenges and whether values should be embedded or learned from data <|cite_start|> (Reference: Human compatible: artificial intelligence and the problem of control: "The most important book I have read in quite some time" (Daniel Kahneman); "A must-read" (Max Tegmark); "The book we've all been waiting for" (Sam Harris) LONGLISTED FOR THE 2019 FINANCIAL TIMES AND MCKINSEY BUSINESS BOOK OF THE YEAR; A FINANCIAL TIMES BEST BOOK OF THE YEAR 2019 Humans dream of super-intelligent machines. But what happens if we actually succeed? Creating superior intelligence would be the biggest event in human history. Unfortunately, according to the world's pre-eminent AI expert, it could also be the last. In this groundbreaking book on the biggest question facing humanity, Stuart Russell explains why he has come to consider his own discipline an existential threat to our species, and lays out how we can change course before it's too late. There is no one better placed to assess the promise and perils of the dominant technology of the future than Russell, who has spent decades at the forefront of AI research. Through brilliant analogies and crisp, lucid prose, he explains how AI actually works, how it has an enormous capacity to improve our lives - but why we must ensure that we never lose control of machines more powerful than we are. Here Russell shows how we can avert the worst threats by reshaping the foundations of AI to guarantee that machines pursue our objectives, not theirs. Profound, urgent and visionary, Human Compatible is the one book everyone needs to read to understand a future that is coming sooner than we think.) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> Accountability in Algorithmic Decision Making: Every fiscal quarter automated writing algorithms churn out thousands of corporate earnings articles for the AP (Associated Press) based on little more than structured data. Companies such as Automated Insights, which produces the articles for AP, and Narrative Science can now write straight news articles in almost any domain that has clean and well-structured data: finance, sure, but also sports, weather, and education, among others. The articles aren’t cardboard either; they have variability, tone, and style, and in some cases readers even have difficulty distinguishing the machine-produced articles from human-written ones. <|reference_end|>",
"<|reference_start|> It’s Time to Do Something: Mitigating the Negative Impacts of Computing Through a Change to the Peer Review Process: The computing research community needs to work much harder to address the downsides of our innovations. Between the erosion of privacy, threats to democracy, and automation's effect on employment (among many other issues), we can no longer simply assume that our research will have a net positive impact on the world. While bending the arc of computing innovation towards societal benefit may at first seem intractable, we believe we can achieve substantial progress with a straightforward step: making a small change to the peer review process. As we explain below, we hypothesize that our recommended change will force computing researchers to more deeply consider the negative impacts of their work. We also expect that this change will incentivize research and policy that alleviates computing's negative impacts. <|reference_end|>",
"<|reference_start|> Principles of {{Biomedical Ethics: In this presentation, I will discuss the principles of biomedical and Islamic medical ethics and an interfaith perspective on end-of-life issues. I will also discuss three cases to exemplify some of the conflicts in ethical decision-making. <|reference_end|>",
"<|reference_start|> Science, Policy, and the Value-Free Ideal: The role of science in policymaking has gained unprecedented stature in the United States, raising questions about the place of science and scientific expertise in the democratic process. Some scientists have been given considerable epistemic authority in shaping policy on issues of great moral and cultural significance, and the politicizing of these issues has become highly contentious. Since World War II, most philosophers of science have purported the concept that science should be 'value-free'. In \"Science, Policy and the Value-Free Ideal\", Heather E. Douglas argues that such an ideal is neither adequate nor desirable for science. She contends that the moral responsibilities of scientists require the consideration of values even at the heart of science. She lobbies for a new ideal in which values serve an essential function throughout scientific inquiry, but where the role values play is constrained at key points, thus protecting the integrity and objectivity of science. In this vein, Douglas outlines a system for the application of values to guide scientists through points of uncertainty fraught with moral valence. Following a philosophical analysis of the historical background of science advising and the value-free ideal, Douglas defines how values should - and should not - function in science. She discusses the distinctive direct and indirect roles for values in reasoning, and outlines seven senses of objectivity, showing how each can be employed to determine the reliability of scientific claims. Douglas then uses these philosophical insights to clarify the distinction between junk science and sound science to be used in policymaking. In conclusion, she calls for greater openness on the values utilized in policymaking, and more public participation in the policymaking process, by suggesting various models for effective use of both the public and experts in key risk assessments. <|reference_end|>"
] | [
10,
11,
21,
24
] | {"<|cite_13|>": "ss-1401738", "<|multi_cite_1_1|>": "ss-742121", "<|multi_cite_1_2|>": "ss-1603279", "<|cite_2|>": "ss-1401735", "<|multi_cite_3_1|>": "ss-785753", "<|multi_cite_3_2|>": "ss-1053157", "<|multi_cite_3_3|>": "ss-861759", "<|multi_cite_4_1|>": "arxiv-208207", "<|multi_cite_4_2|>": "arxiv-215931", "<|multi_cite_4_3|>": "ss-771959", "<|cite_5|>": "ss-765438", "<|cite_14|>": "ss-1401735", "<|cite_16|>": "ss-1401740", "<|cite_6|>": "ss-1401736", "<|multi_cite_7_1|>": "ss-1401737", "<|multi_cite_7_2|>": "ss-1360286", "<|cite_17|>": "ss-1401735", "<|cite_8|>": "ss-1401737", "<|cite_9|>": "ss-2315654", "<|cite_10|>": "ss-772603", "<|cite_18|>": "ss-2316288", "<|cite_19|>": "ss-1056801", "<|cite_20|>": "ss-1401736", "<|cite_21|>": "ss-1401741", "<|multi_cite_11_1|>": "ss-1401738", "<|multi_cite_11_2|>": "ss-1401739", "<|cite_22|>": "ss-742121", "<|cite_23|>": "ss-1603279", "<|cite_24|>": "ss-1401742", "<|cite_25|>": "ss-1401743", "<|cite_26|>": "arxiv-297021", "<|cite_28|>": "arxiv-298161", "<|cite_12|>": "ss-724302"} |
2106.12453 | <|paper_start|> Title: Extended formulations for matroid polytopes through randomized protocols
Abstract: Extended formulations for matroid polytopes through randomized protocols: Let $P$ be a polytope. The hitting number of $P$ is the smallest size of a hitting set of the facets of $P$, i.e., a subset of vertices of $P$ such that every facet of $P$ has a vertex in the subset. An extended formulation of $P$ is the description of a polyhedron that linearly projects to $P$. We show that, if $P$ is the base polytope of any matroid, then $P$ admits an extended formulation whose size depends linearly on the hitting number of $P$. Our extended formulations generalize those of the spanning tree polytope given by Martin and Wong. Our proof is simple and short, and it goes through the deep connection between extended formulations and communication protocols.
Introduction
Describing combinatorial problems via geometric objects is a major theme in combinatorial optimization.
For instance, spanning trees in a graph can be described by the spanning tree polytope, which has a well-known description <|cite_start|> (Reference: Matroids and the greedy algorithm: ) <|cite_end|>. However, such a polytope has an exponential number of facets, hence its description is too large to be used in practice. In such cases, one can try to add extra variables to the ``natural'' polytope and find an alternative description in an extended space.
An \emph{extended formulation} of a polytope $P\subset \R^d$ is a polyhedron $Q\subset \R^{d+d'}$ that linearly projects to $P$. The \emph{size} of such a formulation is its number of inequalities (i.e., the number of facets of $Q$), and the \emph{extension complexity} of $P$, denoted by $\xc(P)$, is the minimum size of an extended formulation of $P$.
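As a standard illustration of the gap between the two notions (this example is classical and not specific to this paper), consider the cross-polytope $P=\{x\in\R^d : \sum_{i=1}^d |x_i|\le 1\}$, which has $2^d$ facets. The polyhedron $Q=\{(x,y)\in\R^{2d} : -y_i\le x_i\le y_i \text{ for all } i,\ \sum_{i=1}^d y_i\le 1\}$ projects onto $P$ and is described by only $2d+1$ inequalities, hence $\xc(P)\le 2d+1$ even though any description of $P$ in the original space needs exponentially many inequalities.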
The systematic study of extended formulations and extension complexity began with Yannakakis <|cite_start|> (Reference: Expressing Combinatorial Optimization Problems by Linear
Programs: Many combinatorial optimization problems call for the optimization of a linear function over a certain polytope. Typically, these polytopes have an exponential number of facets. We explore the problem of finding small linear programming formulations when one may use any new variables and constraints. We show that expressing the matching and the Traveling Salesman Problem by a symmetric linear program requires exponential size. We relate the minimum size needed by a LP to express a polytope to a combinatorial parameter, point out some connections with communication complexity theory, and examine the vertex packing polytope for some classes of graphs.) <|cite_end|> and produced a number of impressive results that shed light on the power and on the limits of linear programming <|cite_start|> (Reference: Exponential Lower Bounds for Polytopes in Combinatorial Optimization: We solve a 20-year old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.) <|cite_end|> <|cite_start|> (Reference: The matching polytope has exponential extension complexity: A popular method in combinatorial optimization is to express polytopes P, which may potentially have exponentially many facets, as solutions of linear programs that use few extra variables to reduce the number of constraints down to a polynomial. After two decades of standstill, recent years have brought amazing progress in showing lower bounds for the so called extension complexity, which for a polytope P denotes the smallest number of inequalities necessary to describe a higher dimensional polytope Q that can be linearly projected on P. However, the central question in this field remained wide open: can the perfect matching polytope be written as an LP with polynomially many constraints? We answer this question negatively. In fact, the extension complexity of the perfect matching polytope in a complete n-node graph is 2^Omega(n). By a known reduction this also improves the lower bound on the extension complexity for the TSP polytope from 2^Omega(n^1/2) to 2^Omega(n).) <|cite_end|>. We refer to <|cite_start|> (Reference: Extended Formulations in Combinatorial Optimization: The concept of representing a polytope that is associated with some combinatorial optimization problem as a linear projection of a higher-dimensional polyhedron has recently received increasing attention. In this paper (written for the newsletter Optima of the Mathematical Optimization Society), we provide a brief introduction to this topic and sketch some of the recent developments with respect to both tools for constructing such extended formulations as well as lower bounds on their sizes.) <|cite_end|> for a survey on the topic.
Let $G$ be an $n$-vertex graph. While the description of the spanning tree polytope of $G$ has $\Omega(2^n)$ inequalities, extended formulations due to Wong <|cite_start|> (Reference: A Study of Integer Programming Formulations on Traveling Salesman Problem with Flexible Coloring: Models and Application in Home Healthcare Service: ) <|cite_end|> and Martin <|cite_start|> (Reference: Using separation algorithms to generate mixed integer model reformulations: ) <|cite_end|> have size $O(n^3)$. Since a cubic number of variables and constraints is still large for practical purposes, a famous open question is whether it is possible to find even smaller extended formulations, see <|cite_start|> (Reference: On the Combinatorial Lower Bound for the Extension Complexity of the Spanning Tree Polytope: In the study of extensions of polytopes of combinatorial optimization problems, a notorious open question is that for the size of the smallest extended formulation of the Minimum Spanning Tree problem on a complete graph with $n$ nodes. The best known lower bound is the trival (dimension) bound, $\Omega(n^2)$, the best known upper bound is the extended formulation by Wong (1980) of size $O(n^3)$ (also Martin, 1991). In this note we give a nondeterministic communication protocol with cost $\log_2(n^2\log n)+O(1)$ for the support of the spanning tree slack matrix. This means that the combinatorial lower bounds can improve the trivial lower bound only by a factor of (at most) $O(\log n)$.) <|cite_end|> <|cite_start|> (Reference: Smaller Extended Formulations for the Spanning Tree Polytope of Bounded-genus Graphs: We give an $O(g^{1/2} n^{3/2} + g^{3/2} n^{1/2})$-size extended formulation for the spanning tree polytope of an $n$-vertex graph embedded on a surface of genus $g$, improving on the known $O(n^2 + g n)$-size extended formulations following from Wong and Martin.) <|cite_end|> <|cite_start|> (Reference: Smaller extended formulations for spanning tree polytopes in minor-closed classes and beyond: Let $G$ be a connected $n$-vertex graph in a proper minor-closed class $\mathcal G$. We prove that the extension complexity of the spanning tree polytope of $G$ is $O(n^{3/2})$. This improves on the $O(n^2)$ bounds following from the work of Wong (1980) and Martin (1991). It also extends a result of Fiorini, Huynh, Joret, and Pashkovich (2017), who obtained a $O(n^{3/2})$ bound for graphs embedded in a fixed surface. Our proof works more generally for all graph classes admitting strongly sublinear balanced separators: We prove that for every constant $\beta$ with $0<\beta<1$, if $\mathcal G$ is a graph class closed under induced subgraphs such that all $n$-vertex graphs in $\mathcal G$ have balanced separators of size $O(n^\beta)$, then the extension complexity of the spanning tree polytope of every connected $n$-vertex graph in $\mathcal{G}$ is $O(n^{1+\beta})$. We in fact give two proofs of this result, one is a direct construction of the extended formulation, the other is via communication protocols. Using the latter approach we also give a short proof of the $O(n)$ bound for planar graphs due to Williams (2002).) <|cite_end|>.
Extension complexity is deeply related with the field of communication complexity <|cite_start|> (Reference: Communication Complexity: The first section starts with the basic definitions following mainly the notations of the book written by E. Kushilevitz and N. Nisan. At the end of the first section I examine tree-balancing. In the second section I summarize the well-known lower bound methods and prove the exact complexity of certain functions. In the first part of the third section I introduce the random complexity and prove the basic lemmas about it. In the second part I prove a better lower bound for the complexity of all random functions. In the third part I introduce and compare several upper bounds for the complexity of the identity function. In the fourth section I examine the well-known Direct-sum conjecture. I introduce a different model of computation then prove that it is the same as the original one up to a constant factor. This new model is used to bound the Amortized Time Complexity of a function by the number of the leaves of its protocol-tree. After this I examine the Direct-sum problem in case of Partial Information and in the Random case. In the last section I introduce the well-known hierarchy classes, the reducibility and the completeness of series of functions. Then I define the class PSPACE and Oracles in the communication complexity model and prove some basic claims about them.) <|cite_end|>, which inspired the most celebrated results in the field <|cite_start|> (Reference: Exponential Lower Bounds for Polytopes in Combinatorial Optimization: We solve a 20-year old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs.) <|cite_end|> <|cite_start|> (Reference: The matching polytope has exponential extension complexity: A popular method in combinatorial optimization is to express polytopes P, which may potentially have exponentially many facets, as solutions of linear programs that use few extra variables to reduce the number of constraints down to a polynomial. After two decades of standstill, recent years have brought amazing progress in showing lower bounds for the so called extension complexity, which for a polytope P denotes the smallest number of inequalities necessary to describe a higher dimensional polytope Q that can be linearly projected on P. However, the central question in this field remained wide open: can the perfect matching polytope be written as an LP with polynomially many constraints? We answer this question negatively. In fact, the extension complexity of the perfect matching polytope in a complete n-node graph is 2^Omega(n). By a known reduction this also improves the lower bound on the extension complexity for the TSP polytope from 2^Omega(n^1/2) to 2^Omega(n).) <|cite_end|>. This connection was hinted in <|cite_start|> (Reference: Expressing Combinatorial Optimization Problems by Linear
Programs: Many combinatorial optimization problems call for the optimization of a linear function over a certain polytope. Typically, these polytopes have an exponential number of facets. We explore the problem of finding small linear programming formulations when one may use any new variables and constraints. We show that expressing the matching and the Traveling Salesman Problem by a symmetric linear program requires exponential size. We relate the minimum size needed by a LP to express a polytope to a combinatorial parameter, point out some connections with communication complexity theory, and examine the vertex packing polytope for some classes of graphs.) <|cite_end|> and then established in <|cite_start|> (Reference: Extended formulations, nonnegative factorizations, and randomized communication protocols: ) <|cite_end|>, where the extension complexity of a polytope $P$ is expressed as the complexity of a randomized protocol solving a certain game on vertices and facets of $P$ (see Section \ref{sec:prot} for details). In particular, <|cite_start|> (Reference: Extended formulations, nonnegative factorizations, and randomized communication protocols: ) <|cite_end|> gives a nice, simple protocol for the spanning tree polytope matching the $O(n^3)$ extended formulation from <|cite_start|> (Reference: Using separation algorithms to generate mixed integer model reformulations: ) <|cite_end|>.
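Recall informally how this connection works (we gloss over degenerate cases): by Yannakakis' factorization theorem, $\xc(P)$ equals the nonnegative rank of the slack matrix of $P$, the nonnegative matrix whose entry indexed by a facet $f$ and a vertex $v$ is the slack of $v$ in the inequality defining $f$; a randomized protocol of complexity $c$ computing this matrix in expectation yields a nonnegative factorization with at most $2^c$ factors, and hence an extended formulation of size at most $2^c$.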
Matroids are among the most mysterious objects from the point of view of extension complexity.
The base polytope $B(M)$ of a matroid $M$ is the convex hull of incidence vectors of bases of $M$.
Bases of general matroids generalize the spanning trees of a graph, hence $B(M)$ is a natural generalization of the spanning tree polytope. While the optimization problem for matroids is polynomial-time solvable in the oracle model, it is known <|cite_start|> (Reference: Some 0/1 polytopes need exponential size extended formulations: We prove that there are 0/1 polytopes P that do not admit a compact LP formulation. More precisely we show that for every n there is a sets X \subseteq {0,1}^n such that conv(X) must have extension complexity at least 2^{n/2 * (1-o(1))}. In other words, every polyhedron Q that can be linearly projected on conv(X) must have exponentially many facets. In fact, the same result also applies if conv(X) is restricted to be a matroid polytope. Conditioning on NP not contained in P_{/poly}, our result rules out the existence of any compact formulation for the TSP polytope, even if the formulation may contain arbitrary real numbers.) <|cite_end|> that there are matroids $M$ whose extension complexity (by which we mean $\xc(B(M))$) is exponential. However, finding an explicit class of such matroids is a notorious open problem, deeply related to the field of circuit complexity <|cite_start|> (Reference: Extended formulations for matroid polytopes through randomized protocols: ) <|cite_end|>.
On the other hand, a number of special classes of matroids have been found to have polynomial extension complexity: graphic and cographic matroids (thanks to the aforementioned formulations of the spanning tree polytope), sparsity matroids <|cite_start|> (Reference: Extended Formulations for Sparsity Matroids: We show the existence of a polynomial-size extended formulation for the base polytope of a $(k,\ell)$-sparsity matroid. For an undirected graph $G=(V,E)$, the size of the formulation is $O(|V||E|)$ when $k \geq \ell$ and $O(|V|^2 |E|)$ when $k \leq \ell$. To this end, we employ the technique developed by Faenza et al. recently that uses a randomized communication protocol.) <|cite_end|>, count matroids <|cite_start|> (Reference: Subgraph Polytopes and Independence Polytopes of Count Matroids: Given an undirected graph, the non-empty subgraph polytope is the convex hull of the characteristic vectors of pairs (F, S) where S is a non-empty subset of nodes and F is a subset of the edges with both endnodes in S. We obtain a strong relationship between the non-empty subgraph polytope and the spanning forest polytope. We further show that these polytopes provide polynomial size extended formulations for independence polytopes of count matroids, which generalizes recent results obtained by Iwata et al. referring to sparsity matroids. As a byproduct, we obtain new lower bounds on the extension complexity of the spanning forest polytope in terms of extension complexities of independence polytopes of these matroids.) <|cite_end|>, regular matroids <|cite_start|> (Reference: Regular matroids have polynomial extension complexity: We prove that the extension complexity of the independence polytope of every regular matroid on $n$ elements is $O(n^6)$. Past results of Wong and Martin on extended formulations of the spanning tree polytope of a graph imply a $O(n^2)$ bound for the special case of (co)graphic matroids. However, the case of a general regular matroid was open, despite recent attempts. We also consider the extension complexity of circuit dominants of regular matroids, for which we give a $O(n^2)$ bound.) <|cite_end|>.
In particular, regular matroids are those matroids that can be represented by totally unimodular matrices. This class strictly contains the classes of graphic and cographic matroids and is strictly contained in the class of binary matroids, which can be represented by matrices over the two-element field GF$(2)$.
Proving polynomial upper bounds on the extension complexity of binary matroids, or showing a super-polynomial lower bound on it, is a major open question. All the extended formulations proposed so far for special classes of matroids are deeply based on connections with graphs and their structure.
In this paper, we show a general method to derive extended formulations for the base polytope of matroids. This is done through the aforementioned connection between randomized protocols and extended formulations <|cite_start|> (Reference: Extended formulations, nonnegative factorizations, and randomized communication protocols: ) <|cite_end|>: in particular we extend the protocol in <|cite_start|> (Reference: Extended formulations, nonnegative factorizations, and randomized communication protocols: ) <|cite_end|> for the spanning tree polytope to all matroids, obtaining an extended formulation for $B(M)$ whose size depends on a certain parameter that we introduce.
This parameter can be defined for any polytope as follows: given a polytope $P$, the \emph{hitting number} $h(P)$ of $P$ is the smallest size of a set $\calV$ of vertices of $P$ such that each facet of $P$ contains at least one vertex in $\calV$.
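As a simple illustration, the $0/1$ cube $[0,1]^d$ has hitting number $2$ for every $d\ge 1$: its facets are $\{x_i=0\}$ and $\{x_i=1\}$ for $i=1,\dots,d$, the two vertices $\mathbf{0}$ and $\mathbf{1}$ together meet all of them, and no single vertex can lie on both $\{x_1=0\}$ and $\{x_1=1\}$.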
Our result (Theorem \ref{thm:main}) is that all matroids $M$ on $n$ elements such that $h(B(M))$ is polynomial in $n$ have polynomial extension complexity. The proof of Theorem \ref{thm:main} makes use of the power of randomized protocols (Theorem \ref{thm:random}) and of the bijective basis exchange axiom (Lemma \ref{lem:bijection}); apart from those tools, it is simple and very short.
The hitting number of a polytope is a natural combinatorial parameter, which, to the best of our knowledge, has not been previously studied (in particular not in relation to extension complexity).
It is possible that the connection between these two parameters extends to more general polytopes: in fact, we do not know of any polytope with exponential extension complexity and polynomial hitting number.
The paper is structured as follows: in Section \ref{sec:prelim} we give some preliminaries and recall the connection between extension complexity and randomized protocols from <|cite_start|> (Reference: Extended formulations, nonnegative factorizations, and randomized communication protocols: ) <|cite_end|>; in Section \ref{sec:proof} we prove our main result; in Section \ref{sec:graphic} we apply our result to the special case of graphic and cographic matroids, and re-derive known bounds on their extension complexity; finally, in Section \ref{sec:conclusion} we suggest possible applications of our result and mention some open questions. <|paper_end|> | [
"<|reference_start|> Smaller extended formulations for spanning tree polytopes in minor-closed classes and beyond: Let $G$ be a connected $n$-vertex graph in a proper minor-closed class $\\mathcal G$. We prove that the extension complexity of the spanning tree polytope of $G$ is $O(n^{3/2})$. This improves on the $O(n^2)$ bounds following from the work of Wong (1980) and Martin (1991). It also extends a result of Fiorini, Huynh, Joret, and Pashkovich (2017), who obtained a $O(n^{3/2})$ bound for graphs embedded in a fixed surface. Our proof works more generally for all graph classes admitting strongly sublinear balanced separators: We prove that for every constant $\\beta$ with $0<\\beta<1$, if $\\mathcal G$ is a graph class closed under induced subgraphs such that all $n$-vertex graphs in $\\mathcal G$ have balanced separators of size $O(n^\\beta)$, then the extension complexity of the spanning tree polytope of every connected $n$-vertex graph in $\\mathcal{G}$ is $O(n^{1+\\beta})$. We in fact give two proofs of this result, one is a direct construction of the extended formulation, the other is via communication protocols. Using the latter approach we also give a short proof of the $O(n)$ bound for planar graphs due to Williams (2002). <|reference_end|>",
"<|reference_start|> Exponential Lower Bounds for Polytopes in Combinatorial Optimization: We solve a 20-year old problem posed by Yannakakis and prove that there exists no polynomial-size linear program (LP) whose associated polytope projects to the traveling salesman polytope, even if the LP is not required to be symmetric. Moreover, we prove that this holds also for the cut polytope and the stable set polytope. These results were discovered through a new connection that we make between one-way quantum communication protocols and semidefinite programming reformulations of LPs. <|reference_end|>",
"<|reference_start|> The matching polytope has exponential extension complexity: A popular method in combinatorial optimization is to express polytopes P, which may potentially have exponentially many facets, as solutions of linear programs that use few extra variables to reduce the number of constraints down to a polynomial. After two decades of standstill, recent years have brought amazing progress in showing lower bounds for the so called extension complexity, which for a polytope P denotes the smallest number of inequalities necessary to describe a higher dimensional polytope Q that can be linearly projected on P. However, the central question in this field remained wide open: can the perfect matching polytope be written as an LP with polynomially many constraints? We answer this question negatively. In fact, the extension complexity of the perfect matching polytope in a complete n-node graph is 2^Omega(n). By a known reduction this also improves the lower bound on the extension complexity for the TSP polytope from 2^Omega(n^1/2) to 2^Omega(n). <|reference_end|>",
"<|reference_start|> Extended formulations, nonnegative factorizations, and randomized communication protocols: <|reference_end|>"
] | [
9,
11,
12,
22
] | {"<|cite_1|>": "ss-2311547", "<|cite_2|>": "ss-704398", "<|multi_cite_3_1|>": "arxiv-25947", "<|multi_cite_3_2|>": "arxiv-52536", "<|cite_4|>": "ss-1006597", "<|cite_5|>": "ss-998981", "<|cite_6|>": "ss-771128", "<|multi_cite_7_1|>": "arxiv-115890", "<|multi_cite_7_2|>": "arxiv-96793", "<|multi_cite_7_3|>": "arxiv-350278", "<|cite_8|>": "arxiv-14870", "<|multi_cite_9_1|>": "arxiv-25947", "<|multi_cite_9_2|>": "arxiv-52536", "<|cite_10|>": "ss-704398", "<|cite_11|>": "ss-2537740", "<|cite_12|>": "ss-2537740", "<|cite_13|>": "ss-771128", "<|cite_14|>": "arxiv-21112", "<|cite_15|>": "ss-2393496", "<|cite_16|>": "arxiv-58619", "<|cite_17|>": "arxiv-72824", "<|cite_18|>": "arxiv-224443", "<|cite_19|>": "ss-2537740", "<|cite_20|>": "ss-2537740", "<|cite_21|>": "ss-2537740"} |
2012.11129 | <|paper_start|> Title: A Semi-Lagrangian Computation of Front Speeds of G-equation in ABC and Kolmogorov Flows with Estimation via Ballistic Orbits
Abstract: A Semi-Lagrangian Computation of Front Speeds of G-equation in ABC and Kolmogorov Flows with Estimation via Ballistic Orbits: The Arnold-Beltrami-Childress (ABC) flow and the Kolmogorov flow are three dimensional periodic divergence free velocity fields that exhibit chaotic streamlines. We are interested in front speed enhancement in G-equation of turbulent combustion by large intensity ABC and Kolmogorov flows. We give a quantitative construction of the ballistic orbits of ABC and Kolmogorov flows, namely those with maximal large time asymptotic speeds in a coordinate direction. Thanks to the optimal control theory of G-equation (a convex but non-coercive Hamilton-Jacobi equation), the ballistic orbits serve as admissible trajectories for front speed estimates. To study the tightness of the estimates, we compute the front speeds of G-equation based on a semi-Lagrangian (SL) scheme with Strang splitting and weighted essentially non-oscillatory (WENO) interpolation. Time step size is chosen so that the Courant number grows sublinearly with the flow intensity. Numerical results show that the front speed growth rate in terms of the flow intensity may approach the analytical bounds from the ballistic orbits.
Introduction
The study of transport phenomena in three dimensional fluid flows is a challenging problem, due in part to the presence of chaos and the high computational costs in resolving small scales, <|cite_start|> (Reference: Turbulent combustion modeling: ) <|cite_end|> <|cite_start|> (Reference: SIMPLIFIED MODELS FOR TURBULENT DIFFUSION : THEORY, NUMERICAL MODELLING, AND PHYSICAL PHENOMENA: ) <|cite_end|> <|cite_start|> (Reference: Turbulent combustion modeling: ) <|cite_end|> <|cite_start|> (Reference: Front Propagation in Heterogeneous Media: A review is presented of recent results on front propagation in reaction-diffusion-advection equations in homogeneous and heterogeneous media. Formal asymptotic expansions and heuristic ideas are used to motivate the results wherever possible. The fronts include constant-speed monotone traveling fronts in homogeneous media, periodically varying traveling fronts in periodic media, and fluctuating and fractal fronts in random media. These fronts arise in a wide range of applications such as chemical kinetics, combustion, biology, transport in porous media, and industrial deposition processes. Open problems are briefly discussed along the way.) <|cite_end|> <|cite_start|> (Reference: An Introduction to Fronts in Random Media: ) <|cite_end|> and references therein.
In this paper, we consider the Arnold-Beltrami-Childress (ABC) flow <|cite_start|> (Reference: Sur la topologie des écoulements stationnaires des fluides parfaits: ) <|cite_end|> <|cite_start|> (Reference: Chaotic Streamlines in the ABC Flows: The particle paths of the Arnold-Beltrami-Childress (ABC) flows \[ u = (A \sin z+ C \cos y, B \sin x + A \cos z, C \sin y + B \cos x). \] are investigated both analytically and numerically. This three-parameter family of spatially periodic flows provides a simple steady-state solution of Euler's equations. Nevertheless, the streamlines have a complicated Lagrangian structure which is studied here with dynamical systems tools. In general, there is a set of closed (on the torus, T3) helical streamlines, each of which is surrounded by a finite region of KAM invariant surfaces. For certain values of the parameters strong resonances occur which disrupt the surfaces. The remaining space is occupied by chaotic particle paths: here stagnation points may occur and, when they do, they are connected by a web of heteroclinic streamlines. When one of the parameters A, B or C vanishes the flow is integrable. In the neighbourhood, perturbation techniques can be used to predict strong resonances. A systematic search for integrable cases is done using Painlevé tests, i.e. studying complex-time singularities of fluid-particle trajectories. When ABC ≠ 0 recursive clustering of complex time singularities occurs that seems characteristic of non-integrable behaviour.) <|cite_end|>
\be\label{ABC}
\V_1(x,y,z)=\left\langle
\sin z+\cos y, \sin x+\cos z, \sin y+\cos x
\right\rangle.
\ee
and the Kolmogorov flow <|cite_start|> (Reference: Stretch, Twist, Fold: The Fast Dynamo: ) <|cite_end|> (or Archontis flow <|cite_start|> (Reference: Visualization of the kinematic regime of the Archontis dynamo: ) <|cite_end|>)
\be\label{K}
\V_2(x,y,z)=\left\langle
\sin z, \sin x,\sin y
\right\rangle.
\ee
While periodic in $(2\pi\T)^3$, these flows are well-known for exhibiting chaotic streamlines. They have been studied in many contexts, including the electromagnetic conductivity in kinematic dynamo problem, the traveling wave speed in reaction-diffusion-advection equation and the eddy diffusivity <|cite_start|> (Reference: Eddy Diffusivities in Scalar Transport: Standard and anomalous transport in incompressible flow is investigated using multiscale techniques. Eddy diffusivities emerge from the multiscale analysis through the solution of an auxiliary equation. From the latter it is derived an upper bound to eddy diffusivities, valid for both static and time‐dependent flow. The auxiliary problem is solved by a perturbative expansion in powers of the Peclet number resummed by Pade approximants and a conjugate gradient method. The results are compared to numerical simulations of tracers dispersion for three flows having different properties of Lagrangian chaos. It is shown on a concrete example how the presence of anomalous diffusion in deterministic flows can be revealed from the singular behavior of the eddy diffusivity at very small molecular diffusivities.) <|cite_end|> <|cite_start|> (Reference: Stretch, Twist, Fold: The Fast Dynamo: ) <|cite_end|> <|cite_start|> (Reference: ABC flows Then and Now: We review cellular space-periodic dynamos without scale separation, starting with early work in the 1980s on ABC flows with prescribed steady velocity fields u = (A sin z + C cos y, B sin x + A cos z, C sin y + B cos x). These naturally led to work done in the 1990s together with Mike Proctor on 2-D time-dependent versions which gave strong numerical evidence for the existence of fast dynamos growing on the flow turnover timescale. Similar calculations were subsequently performed for a spherical shell geometry jointly with Rainer Hollerbach. Also in the 1990s other studies began to take into account the back reaction of the Lorentz force when the flow rather than being prescribed was instead allowed to evolve in response to a forcing of the above ABC form. The dynamos that resulted were mostly filamentary and showed a disconcerting tendency to equilibrate with total magnetic energy much less than total kinetic energy in the low diffusivity limit relevant for astrophysics. The remarkable discovery by Archontis in 1999 of a non-filamentary dynamo with almost equal magnetic and kinetic energies showed that the unfavourable scalings for the filamentary case can be overcome; this dynamo used an ABC forcing with the cosines left out. Since then several authors have been struggling with partial success to understand just how this state of affairs comes about. Most recently efforts have been made to produce other examples of this type of dynamo, to investigate why the Archontis case is robust over a wide range of magnetic Prandtl numbers ν/η, and above all to understand its remarkable stability at very low diffusivities when non-magnetic flows are almost always unstable.) <|cite_end|> <|cite_start|> (Reference: Numerical calculations of fast dynamos in smooth velocity fields with realistic diffusion: ) <|cite_end|> <|cite_start|> (Reference: Finite Element Computation of KPP Front Speeds in 3D Cellular and ABC Flows: We carried out a computational study of propagation speeds of reaction-diffusion-advection fronts in three dimensional (3D) cellular and Arnold-Beltrami-Childress (ABC) flows with Kolmogorov-Petrovsky-Piskunov(KPP) nonlinearity. 
The variational principle of front speeds reduces the problem to a principal eigenvalue calculation. An adaptive streamline diffusion finite element method is used in the advection dominated regime. Numerical results showed that the front speeds are enhanced in cellular flows according to sublinear power law O (δ p ), p ≈ 0.13, δ the flow intensity. In ABC flows however, the enhancement is O (δ ) which can be attributed to the presence of principal vortex tubes in the streamlines. Poincare sections are used to visualize and quantify the chaotic fractions of ABC flows in the phase space. The effect of chaotic streamlines of ABC flows on front speeds is studied by varying the three parameters (a,b,c ) of the ABC flows. Speed enhancement along x direction is reduced as b (the parameter controling the flow variation along x ) increases at fixed (a,c ) > 0, more rapidly as the corresponding ABC streamlines become more chaotic.) <|cite_end|> <|cite_start|> (Reference: Sharp uniform in time error estimate on a stochastic structure-preserving Lagrangian method and computation of effective diffusivity in 3D chaotic flows: In this paper, we study the problem of computing the effective diffusivity for a particle moving in chaotic flows. Instead of solving a convection-diffusion type cell problem in the Eulerian formulation (arising from homogenization theory for the Fokker-Planck equation), we compute the motion of particles in the Lagrangian formulation, which is modeled by stochastic differential equations (SDEs). A robust numerical integrator based on a splitting method was proposed to solve the SDEs and a rigorous error analysis for the numerical integrator was provided using the backward error analysis (BEA) technique [29]. However, the upper bound in the error estimate is not sharp. In this paper, we propose a completely new and sharp error analysis for the numerical integrator that allows us to get rid of the exponential growth factor in our previous error estimate. Our new error analysis is based on a probabilistic approach, which interprets the solution process generated by our numerical integrator as a Markov process. By exploring the ergodicity of the solution process, we prove the convergence analysis of our method in computing the effective diffusivity over infinite time. We present numerical results to demonstrate the accuracy and efficiency of the proposed method in computing effective diffusivity for several chaotic flows, especially the Arnold-Beltrami-Childress (ABC) flow and the Kolmogorov flow in three-dimensional space.) <|cite_end|>.
Denote by $\X:\R\to\R^3$ a Lagrangian trajectory of the ABC or Kolmogorov flow satisfying $\dot{\X}(\cdot)=\V(\X(\cdot))$. In search of the ballistic orbits in, say, the $x$-direction (the $y$- and $z$-directions are similar), let the trajectory start from the $yz$-plane and evaluate its large-time asymptotic speed in the $x$-direction as ($\e_1 = \langle 1,0,0\rangle$):
\be\label{Xbar}
\bar{x}=\lim_{t\to\infty}{\X(t)\cdot\e_1\over t},\,\,\, \X(0)=\langle0,y,z \rangle.
\ee
See \cref{Xmap}. The orbits $\X(t)$ are generated on an 800$\times$800 mesh of $\langle y,z\rangle\in(2\pi\T)^2$ by the ODE solver ode113 in MATLAB, and the propagation speeds $\bar{x}$ are evaluated at $t=1000$. For the ABC flow, $\bar{x}$ reaches its maximum when $\X(0)\approx\langle0,5.942,1.571\rangle$ (accurate to three decimal places); for the Kolmogorov flow, $\bar{x}$ reaches its maximum when $\X(0)\approx\langle0,0.029,1.571\rangle$. It turns out that these orbits with maximum asymptotic speeds are periodic (modulo $2\pi$) in the $x$-direction, that is, there exists $\tau>0$ such that $\X(\cdot+\tau)=\X(\cdot)+2\pi\cdot\e_1$. See \cref{Xorbit}. Similarly, the orbits with minimum asymptotic speeds are periodic in the negative $x$-direction: $\X(\cdot+\tau)=\X(\cdot)-2\pi\cdot\e_1$.
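For a single initial condition, this computation can be sketched in a few lines of Python (shown only for illustration; the integrator and tolerances below are assumptions and not the ode113 settings used for the results above).
\begin{verbatim}
# Integrate one streamline of the ABC flow and estimate its asymptotic
# x-speed, i.e. the large-time limit of X(t).e1 / t.
import numpy as np
from scipy.integrate import solve_ivp

def abc(t, X):
    x, y, z = X
    return [np.sin(z) + np.cos(y),
            np.sin(x) + np.cos(z),
            np.sin(y) + np.cos(x)]

T = 1000.0
X0 = [0.0, 5.942, 1.571]   # near the maximizing initial point reported above
sol = solve_ivp(abc, (0.0, T), X0, method="LSODA", rtol=1e-10, atol=1e-12)
x_bar = sol.y[0, -1] / T   # approximate asymptotic speed in the x-direction
print(x_bar)
\end{verbatim}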
The periodic orbit of the ABC flow was first proved to exist in <|cite_start|> (Reference: Periodic Orbits of the ABC Flow with A=B=C=1: In this paper, we prove that the ODE system $$ \begin{align*} \dot x &=\sin z+\cos y\\ \dot y &= \sin x+\cos z\\ \dot z &=\sin y + \cos x, \end{align*} $$ whose right-hand side is the Arnold-Beltrami-Childress (ABC) flow with parameters $A=B=C=1$, has periodic orbits on $(2\pi\mathbb T)^3$ with rotation vectors parallel to $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$. An application of this result is that the well-known G-equation model for turbulent combustion with this ABC flow on $\mathbb R^3$ has a linear (i.e., maximal possible) flame speed enhancement rate as the amplitude of the flow grows.) <|cite_end|>. The authors found an orbit that starts from the line segment $\{x=-\pi/2,y=0,z\in[0,\pi/2]\}$ and passes through the line segment $\{x=0,y\in[-\pi/2,3\pi/2],z=\pi/2\}$. Thanks to the symmetries of the ABC flow, such an orbit also inherits certain symmetries and therefore is periodic in the $x$-direction. For the Kolmogorov flow, we found numerically that the periodic orbit starts from the line segment $\{x=0,y\in[0,\pi/2],z=\pi/2\}$ and passes through the line segment $\{x=\pi/2,y=\pi,z\in[\pi,3\pi/2]\}$. See \cref{prop:1} and \cref{prop:2} for precise statements.
In turbulent combustion theory, the G-equation is a front propagation model of thin flames <|cite_start|> (Reference: Turbulent combustion modeling: ) <|cite_end|> <|cite_start|> (Reference: Turbulent combustion modeling: ) <|cite_end|>:
\be\label{Geq}
{\partial G\over\partial t}+\V(\x)\cdot\nabla G+|\nabla G|=0.
\ee
Formulated via the level set method, the flame front $\{G(\x,t)=0\}$ propagates with unit laminar speed along its normal direction $\n=\nabla G/|\nabla G|$ due to fuel combustion, while being advected by the flow velocity $\V(\x)$ due to fuel convection. In three-dimensional space, let the initial flame front be the $yz$-plane:
\be\label{IC}
G(\x,0)=\x\cdot\e_1,\,\,\,\x\in\R^3.
\ee
Eventually the flame front propagates in the $x$-direction at the so-called
{\it turbulent flame speed}:
\be\label{sTa}
s_T :=\lim_{t\to\infty}-{G(\x,t)\over t}
\ee
where convergence holds for all $\x$ and $s_T$ is independent of $\x$.
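In numerical practice, the limit in (\ref{sTa}) is approximated at a finite time, for instance through the quotient $s_T\approx\left[G(\x,t)-G(\x,t+\Delta T)\right]/\Delta T$ with $t$ and $\Delta T$ taken large.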
One fundamental issue in turbulent combustion theory is front speed enhancement due to fluid convection. In the G-equation model, let the flow velocity be the ABC flow (\ref{ABC}) with intensity $A>0$:
\be\label{AV}
\V(\x)=A\cdot\V_1(\x)=A\cdot\left\langle
\sin z+\cos y, \sin x+\cos z, \sin y+\cos x
\right\rangle
\ee
or the Kolmogorov flow (\ref{K}): $\V(\x)=A\cdot\V_2(\x)=A\cdot\left\langle
\sin z, \sin x, \sin y\right\rangle$. We would like to study the growth rate of the turbulent flame speed with respect to the flow intensity: $s_T(A)$ as a function of $A$. In the case of the two-dimensional cellular flow $\V(x,y)=A\!\cdot\!\langle-\sin x\cos y,\cos x\sin y\rangle$, the growth rate of the turbulent flame speed is given by $s_T(A)=O(A/\log A)$ <|cite_start|> (Reference: Sharp asymptotic growth laws of turbulent flame speeds in cellular flows by inviscid Hamilton–Jacobi models: ) <|cite_end|>. Using the optimal control theory of the Hamilton-Jacobi-Bellman (HJB) equation, the ballistic orbits are chosen as admissible trajectories to obtain the upper and lower bounds of turbulent flame speeds. See \cref{thm:1}.
With the Hamiltonian discretized as a monotone and consistent numerical Hamiltonian, finite difference computation of the G-equation has been quite successful in two-dimensional space <|cite_start|> (Reference: Level set methods and dynamic implicit surfaces: ) <|cite_end|> <|cite_start|> (Reference: A numerical study of turbulent flame speeds of curvature and strain G-equations in cellular flows: ) <|cite_end|>. When it comes to three-dimensional space, however, the computational cost increases considerably in the large flow intensity regime. Specifically, the Courant number as well as the constraint on the time step size (CFL condition, assuming $\Delta x=\Delta y=\Delta z$) reads
\be\label{CFL_FD}\begin{array}{ll}
(6A+\sqrt{3})\cdot\Delta t/\Delta x\leq 1 & \mbox{(ABC flow)}\\
(3A+\sqrt{3})\cdot\Delta t/\Delta x\leq 1 & \mbox{(Kolmogorov flow)}
\end{array}.
\ee
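For instance (with purely illustrative numbers), taking $A=100$ and $\Delta x = 2\pi/64\approx 0.098$, the ABC-flow condition in (\ref{CFL_FD}) already forces $\Delta t\le \Delta x/(6A+\sqrt{3})\approx 1.6\times 10^{-4}$, i.e., several thousand time steps per unit time.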
Therefore it is desirable to consider other numerical methods when the flow intensity $A$ is large.
The semi-Lagrangian (SL) scheme was first introduced as a first-order approximation of the scalar convection equation (also called the Courant-Isaacson-Rees scheme <|cite_start|> (Reference: On the solution of nonlinear hyperbolic differential equations by finite differences: ) <|cite_end|>). Further developed with many techniques such as dimensional splitting or higher order interpolation, the semi-Lagrangian scheme has become very popular in weather forecast modeling and many other multidimensional atmospheric problems <|cite_start|> (Reference: Semi-Lagrangian integration schemes for atmospheric models-a review: Abstract The semi-Lagrangian methodology is described for a hierarchy of applications (passive advection, forced advection, and coupled sets of equations) of increasing complexity, in one, two, and three dimensions. Attention is focused on its accuracy, stability, and efficiency properties. Recent developments in applying semi-Lagrangian methods to 2D and 3D atmospheric flows in both Cartesian and spherical geometries are then reviewed. Finally, the current status of development is summarized, followed by a short discussion of future perspectives.) <|cite_end|>.
With the semi-Lagrangian scheme applied to the advection term in the G-equation, it remains to discretize the laminar term. In <|cite_start|> (Reference: Semi-Lagrangian advection-propagation (SLAP) scheme for three-dimensional interface tracking: ) <|cite_end|>, the solution is considered smooth, and the laminar velocity is incorporated into the flow velocity for higher order approximation. In <|cite_start|> (Reference: Semi-Lagrangian Schemes for Hamilton-Jacobi Equations, Discrete Representation Formulae and Godunov Methods: We study a class of semi-Lagrangian schemes which can be interpreted as a discrete version of the Hopf-Lax-Oleinik representation formula for the exact viscosity solution of first order evolutive Hamilton-Jacobi equations. That interpretation shows that the scheme is potentially accurate to any prescribed order. We discuss how the method can be implemented for convex and coercive Hamiltonians with a particular structure and how this method can be coupled with a discrete Legendre trasform. We also show that in one dimension, the first-order semi-Lagrangian scheme coincides with the integration of the Godunov scheme for the corresponding conservation laws. Several test illustrate the main features of semi-Lagrangian schemes for evolutive Hamilton-Jacobi equations.) <|cite_end|>, the laminar term is discretized by the Hopf-Lax formula, and the solution is evaluated through function minimization. In our present work, thanks to operator splitting, the flow velocity is discretized by the semi-Lagrangian scheme, and the function is evaluated by WENO interpolation <|cite_start|> (Reference: A Weighted Essentially Nonoscillatory, Large Time-Step Scheme for Hamilton--Jacobi Equations: We investigate the application of weighted essentially nonoscillatory (WENO) reconstructions to a class of semi-Lagrangian schemes for first order time-dependent Hamilton--Jacobi equations. In particular, we derive a general form of the scheme, study sufficient conditions for its convergence with high-order reconstructions, and perform numerical tests to study its efficiency. In addition, we prove that the weights of the WENO interpolants are positive for any order.) <|cite_end|> <|cite_start|> (Reference: Semi-Lagrangian Approximation schemes for linear and Hamilton-Jacobi equations: This largely self-contained book provides a unified framework of semi-Lagrangian strategy for the approximation of hyperbolic PDEs, with a special focus on Hamilton-Jacobi equations. The authors provide a rigorous discussion of the theory of viscosity solutions and the concepts underlying the construction and analysis of difference schemes; they then proceed to high-order semi-Lagrangian schemes and their applications to problems in fluid dynamics, front propagation, optimal control, and image processing. The developments covered in the text and the references come from a wide range of literature.) <|cite_end|>; the laminar velocity is discretized by a finite difference method, and the derivatives are evaluated by the HJ WENO scheme <|cite_start|> (Reference: Weighted ENO Schemes for Hamilton-Jacobi Equations: In this paper, we present a weighted ENO (essentially nonoscillatory) scheme to approximate the viscosity solution of the Hamilton--Jacobi equation: $$ \phi_t + H(x_1,\ldots,x_d,t,\phi,\phi_{x_1},\ldots,\phi_{x_d}) = 0. $$ This weighted ENO scheme is constructed upon and has the same stencil nodes as the third order ENO scheme but can be as high as fifth order accurate in the smooth part of the solution. 
In addition to the accuracy improvement, numerical comparisons between the two schemes also demonstrate that the weighted ENO scheme is more robust than the ENO scheme.) <|cite_end|> <|cite_start|> (Reference: Level set methods and dynamic implicit surfaces: ) <|cite_end|> <|cite_start|> (Reference: High order weighted essentially nonoscillatory schemes for
convection dominated problems: High order accurate weighted essentially nonoscillatory (WENO) schemes are relatively new but have gained rapid popularity in numerical solutions of hyperbolic partial differential equations (PDEs) and other convection dominated problems. The main advantage of such schemes is their capability to achieve arbitrarily high order formal accuracy in smooth regions while maintaining stable, nonoscillatory, and sharp discontinuity transitions. The schemes are thus especially suitable for problems containing both strong discontinuities and complex smooth solution features. WENO schemes are robust and do not require the user to tune parameters. At the heart of the WENO schemes is actually an approximation procedure not directly related to PDEs, hence the WENO procedure can also be used in many non-PDE applications. In this paper we review the history and basic formulation of WENO schemes, outline the main ideas in using WENO schemes to solve various hyperbolic PDEs and other convection dominated problems, and present a collection of applications in areas including computational fluid dynamics, computational astronomy and astrophysics, semiconductor device simulation, traffic flow models, computational biology, and some non-PDE applications. Finally, we mention a few topics concerning WENO schemes that are currently under investigation.) <|cite_end|>.
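To make the splitting step concrete, the following one-dimensional Python sketch shows a single semi-Lagrangian advection sub-step (for illustration only: the scheme in \cref{sec:4} is three-dimensional with Strang splitting and WENO interpolation, whereas this sketch uses linear interpolation and a first-order backward characteristic for brevity).
\begin{verbatim}
# One semi-Lagrangian sub-step for G_t + v(x) G_x = 0 on a periodic grid:
# trace the characteristic through each grid node backward over one time step
# and interpolate the previous solution at the departure point.
import numpy as np

def sl_advect_step(G, v, x, dt, L=2*np.pi):
    feet = x - dt * v(x)                    # first-order backward characteristics
    return np.interp(feet, x, G, period=L)  # G^{n+1}(x_j) = G^n(departure point)

N = 256
x = np.linspace(0.0, 2*np.pi, N, endpoint=False)
G = np.sin(x)                               # an arbitrary initial profile
v = lambda s: 1.0 + 0.5*np.cos(s)           # a smooth periodic velocity
G_new = sl_advect_step(G, v, x, dt=0.1)
\end{verbatim}
Unlike (\ref{CFL_FD}), the interpolation-based update above does not require the Courant number to stay below one, which is the basic reason a semi-Lagrangian treatment of the advection term can afford larger time steps.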
The rest of the paper is organized as follows. In \cref{sec:2}, we find the ballistic orbits of the ABC and Kolmogorov flows numerically and verify that these orbits are periodic (modulo 2$\pi$) in the $x$-direction. In \cref{sec:3}, we present the control formulation of the G-equation and obtain the estimates of turbulent flame speeds. In \cref{sec:4}, we provide the semi-Lagrangian discretization of the G-equation and the numerical results of turbulent flame speeds. In \cref{sec:5}, we conclude the paper with comments and future work.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{M138769f1.pdf}
\end{center}
\caption{Approximate asymptotic speeds of Lagrangian orbits in the $x$-direction evaluated by (\ref{Xbar}). Left: ABC flow. Right: Kolmogorov flow.}
\label{Xmap}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{M138769f2.pdf}
\caption{Ballistic orbits that are periodic (modulo 2$\pi$) in the $x$-direction. Left: ABC flow. Right: Kolmogorov flow.}
\label{Xorbit}
\end{figure} <|paper_end|> | [
"<|reference_start|> Sur la topologie des écoulements stationnaires des fluides parfaits: <|reference_end|>",
"<|reference_start|> Sharp uniform in time error estimate on a stochastic structure-preserving Lagrangian method and computation of effective diffusivity in 3D chaotic flows: In this paper, we study the problem of computing the effective diffusivity for a particle moving in chaotic flows. Instead of solving a convection-diffusion type cell problem in the Eulerian formulation (arising from homogenization theory for the Fokker-Planck equation), we compute the motion of particles in the Lagrangian formulation, which is modeled by stochastic differential equations (SDEs). A robust numerical integrator based on a splitting method was proposed to solve the SDEs and a rigorous error analysis for the numerical integrator was provided using the backward error analysis (BEA) technique [29]. However, the upper bound in the error estimate is not sharp. In this paper, we propose a completely new and sharp error analysis for the numerical integrator that allows us to get rid of the exponential growth factor in our previous error estimate. Our new error analysis is based on a probabilistic approach, which interprets the solution process generated by our numerical integrator as a Markov process. By exploring the ergodicity of the solution process, we prove the convergence analysis of our method in computing the effective diffusivity over infinite time. We present numerical results to demonstrate the accuracy and efficiency of the proposed method in computing effective diffusivity for several chaotic flows, especially the Arnold-Beltrami-Childress (ABC) flow and the Kolmogorov flow in three-dimensional space. <|reference_end|>",
"<|reference_start|> Semi-Lagrangian Schemes for Hamilton-Jacobi Equations, Discrete Representation Formulae and Godunov Methods: We study a class of semi-Lagrangian schemes which can be interpreted as a discrete version of the Hopf-Lax-Oleinik representation formula for the exact viscosity solution of first order evolutive Hamilton-Jacobi equations. That interpretation shows that the scheme is potentially accurate to any prescribed order. We discuss how the method can be implemented for convex and coercive Hamiltonians with a particular structure and how this method can be coupled with a discrete Legendre trasform. We also show that in one dimension, the first-order semi-Lagrangian scheme coincides with the integration of the Godunov scheme for the corresponding conservation laws. Several test illustrate the main features of semi-Lagrangian schemes for evolutive Hamilton-Jacobi equations. <|reference_end|>",
"<|reference_start|> High order weighted essentially nonoscillatory schemes for\nconvection dominated problems: High order accurate weighted essentially nonoscillatory (WENO) schemes are relatively new but have gained rapid popularity in numerical solutions of hyperbolic partial differential equations (PDEs) and other convection dominated problems. The main advantage of such schemes is their capability to achieve arbitrarily high order formal accuracy in smooth regions while maintaining stable, nonoscillatory, and sharp discontinuity transitions. The schemes are thus especially suitable for problems containing both strong discontinuities and complex smooth solution features. WENO schemes are robust and do not require the user to tune parameters. At the heart of the WENO schemes is actually an approximation procedure not directly related to PDEs, hence the WENO procedure can also be used in many non-PDE applications. In this paper we review the history and basic formulation of WENO schemes, outline the main ideas in using WENO schemes to solve various hyperbolic PDEs and other convection dominated problems, and present a collection of applications in areas including computational fluid dynamics, computational astronomy and astrophysics, semiconductor device simulation, traffic flow models, computational biology, and some non-PDE applications. Finally, we mention a few topics concerning WENO schemes that are currently under investigation. <|reference_end|>"
] | [
5,
14,
24,
29
] | {"<|multi_cite_1_1|>": "ss-1825001", "<|multi_cite_1_2|>": "ss-860364", "<|multi_cite_1_3|>": "ss-1825001", "<|multi_cite_1_4|>": "ss-1065704", "<|multi_cite_1_5|>": "ss-1065705", "<|multi_cite_2_1|>": "ss-1065706", "<|multi_cite_2_2|>": "ss-1065707", "<|cite_3|>": "ss-1065708", "<|cite_4|>": "ss-1065709", "<|multi_cite_5_1|>": "ss-1065710", "<|multi_cite_5_2|>": "ss-1065708", "<|multi_cite_5_3|>": "ss-1065711", "<|multi_cite_5_4|>": "ss-1065712", "<|multi_cite_5_5|>": "ss-1065713", "<|multi_cite_5_6|>": "ss-1290167", "<|cite_6|>": "ss-1065714", "<|multi_cite_7_1|>": "ss-1825001", "<|multi_cite_7_2|>": "ss-1825001", "<|multi_cite_8_2|>": "ss-1065715", "<|multi_cite_9_1|>": "ss-957540", "<|multi_cite_9_2|>": "ss-1065716", "<|cite_10|>": "ss-759180", "<|cite_11|>": "ss-806983", "<|cite_12|>": "ss-1065717", "<|cite_13|>": "ss-911231", "<|multi_cite_14_1|>": "ss-1065718", "<|multi_cite_14_2|>": "ss-1114404", "<|multi_cite_15_1|>": "ss-1065719", "<|multi_cite_15_2|>": "ss-957540", "<|multi_cite_15_3|>": "ss-905932"} |
2406.17335-1 | <|cite_start|> (Reference: Billion-scale Commodity Embedding for E-commerce Recommendation in Alibaba: Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. The billion-scale data in Taobao creates three major challenges to Taobao's RS: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on the graph embedding framework. We first construct an item graph from users' behavior history. Each item is then represented as a vector using graph embedding. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. Using online A/B test, we show that the online Click-Through-Rate (CTRs) are improved comparing to the previous recommendation methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.) <|cite_end|> <|cite_start|> (Reference: MultiBiSage: A Web-Scale Recommendation System Using Multiple Bipartite Graphs at Pinterest: Graph Convolutional Networks (GCN) can efficiently integrate graph structure and node features to learn high-quality node embeddings. These embeddings can then be used for several tasks such as recommendation and search. At Pinterest, we have developed and deployed PinSage, a data-efficient GCN that learns pin embeddings from the Pin-Board graph. The Pin-Board graph contains pin and board entities and the graph captures the pin belongs to a board interaction. However, there exist several entities at Pinterest such as users, idea pins, creators, and there exist heterogeneous interactions among these entities such as add-to-cart, follow, long-click. In this work, we show that training deep learning models on graphs that captures these diverse interactions would result in learning higher-quality pin embeddings than training PinSage on only the Pin-Board graph. To that end, we model the diverse entities and their diverse interactions through multiple bipartite graphs and propose a novel data-efficient MultiBiSage model. MultiBiSage can capture the graph structure of multiple bipartite graphs to learn high-quality pin embeddings. We take this pragmatic approach as it allows us to utilize the existing infrastructure developed at Pinterest -- such as Pixie system that can perform optimized random-walks on billion node graphs, along with existing training and deployment workflows. We train MultiBiSage on six bipartite graphs including our Pin-Board graph. Our offline metrics show that MultiBiSage significantly outperforms the deployed latest version of PinSage on multiple user engagement metrics.) <|cite_end|>.
It is worth noting that, despite the differences in downstream outputs and recommendation models, LERSs designed for both collaborative filtering and content-based recommendation in fact bear the same goal of reducing the parameter usage of the embedding table -- one for representing content features and the other for representing user/item IDs.
However, the embedding compression paradigms in LERSs are commonly developed in a task-specific fashion, as are the evaluations in existing papers.
Given the shared goal of embedding compression across tasks, another important question arises regarding cross-task generalizability:
\textbf{(RQ2) Do methods that demonstrate strong performance in one task exhibit similarly strong performance in a different recommendation task?}
At the same time, although LERSs are centered around model scalability, many relevant metrics beyond the parameter size and recommendation accuracy, especially the inference speed and runtime memory consumption, remain largely unexplored in existing research.
Inference speed is crucial to user experience and energy efficiency. Runtime memory consumption is closely tied to scalability, as it determines whether an LERS is executable on memory-constrained devices (e.g., TV boxes); lower runtime memory also supports a larger batch size to speed up training.
Unfortunately, in the pursuit of lower parameter sizes, most LERSs introduce an overhead on these two metrics. For example, the compositional embedding table in TTRec <|cite_start|> (Reference: TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models: The memory capacity of embedding tables in deep learning recommendation models (DLRMs) is increasing dramatically from tens of GBs to TBs across the industry. Given the fast growth in DLRMs, novel solutions are urgently needed, in order to enable fast and efficient DLRM innovations. At the same time, this must be done without having to exponentially increase infrastructure capacity demands. In this paper, we demonstrate the promising potential of Tensor Train decomposition for DLRMs (TT-Rec), an important yet under-investigated context. We design and implement optimized kernels (TT-EmbeddingBag) to evaluate the proposed TT-Rec design. TT-EmbeddingBag is 3 times faster than the SOTA TT implementation. The performance of TT-Rec is further optimized with the batched matrix multiplication and caching strategies for embedding vector lookup operations. In addition, we present mathematically and empirically the effect of weight initialization distribution on DLRM accuracy and propose to initialize the tensor cores of TT-Rec following the sampled Gaussian distribution. We evaluate TT-Rec across three important design space dimensions -- memory capacity, accuracy, and timing performance -- by training MLPerf-DLRM with Criteo's Kaggle and Terabyte data sets. TT-Rec achieves 117 times and 112 times model size compression, for Kaggle and Terabyte, respectively. This impressive model size reduction can come with no accuracy nor training time overhead as compared to the uncompressed baseline.) <|cite_end|>needs to be computed via a series of tensor multiplications, compromising the inference speed.
Pruning-based methods, such as PEP <|cite_start|> (Reference: Learnable Embedding Sizes for Recommender Systems: The embedding-based representation learning is commonly used in deep learning recommendation models to map the raw sparse features to dense vectors. The traditional embedding manner that assigns a uniform size to all features has two issues. First, the numerous features inevitably lead to a gigantic embedding table that causes a high memory usage cost. Second, it is likely to cause the over-fitting problem for those features that do not require too large representation capacity. Existing works that try to address the problem always cause a significant drop in recommendation performance or suffers from the limitation of unaffordable training time cost. In this paper, we proposed a novel approach, named PEP (short for Plug-in Embedding Pruning), to reduce the size of the embedding table while avoiding the drop of recommendation accuracy. PEP prunes embedding parameter where the pruning threshold(s) can be adaptively learned from data. Therefore we can automatically obtain a mixed-dimension embedding-scheme by pruning redundant parameters for each feature. PEP is a general framework that can plug in various base recommendation models. Extensive experiments demonstrate it can efficiently cut down embedding parameters and boost the base model's performance. Specifically, it achieves strong recommendation performance while reducing 97-99% parameters. As for the computation cost, PEP only brings an additional 20-30% time cost compared with base models. Codes are available at https://github.com/ssui-liu/learnable-embed-sizes-for-RecSys.) <|cite_end|>, introduce substantial memory overhead due to additional masks over the full embedding table.
The absence of these scalability metrics in evaluation makes it unclear whether a particular LERS is a feasible solution for a given deployment configuration.
Consequently, real-world adoption of LERSs will likely be deterred without comprehensively benchmarking their resource requirements.
Thus, we wonder: \textbf{(RQ3) What is the real-world usability of these LERSs in terms of efficiency and memory consumption during training and inference?}
Motivated by these questions, in this paper we take a formal approach to benchmarking various recently proposed embedding compression techniques for LERSs.
Specifically, to address \textbf{RQ1}, we select a diverse set of methods designed for collaborative filtering or content-based recommendation, and perform thorough benchmarking on both tasks. For each task, we further use two real-world datasets, on which all methods are uniformly tested with three different compression goals (i.e., the target parameter sizes after compression). For each model on each dataset, we systematically tune the hyperparameters to ensure a fair comparison.
We also point out a practically important issue: unlike performance-oriented RS research, the field of LERSs still lacks effective baselines, i.e., easy-to-use models that provide competitive performance across different settings.
Hence, we additionally put forward \textbf{magnitude-based pruning}, a simple and effective baseline that remains competitive in several settings. Moreover, we empirically justify and analyze the suitable use cases for magnitude pruning through our experiments.
To address \textbf{RQ2}, we extend our benchmarking across both collaborative filtering and content-based recommendation tasks for all selected LERSs, regardless of their original downstream tasks. To measure cross-task generalizability, we first define the performance retain rate, a metric that quantifies how well each method preserves the full model's performance after compression. Then, we systematically analyze the obtained results to compare performance across tasks, highlighting the similarities and differences in each method's behavior on the two representative recommendation tasks.
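For intuition, one natural instantiation of such a retain rate, sketched here under the assumption that the same accuracy metric (e.g., AUC or NDCG) is reported for both the compressed and the full model, is simply their ratio:
\begin{equation*}
\text{RetainRate} = \frac{\text{Metric}(\text{compressed model})}{\text{Metric}(\text{full model})} \times 100\%.
\end{equation*}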
To address \textbf{RQ3}, we deploy all tested LERSs in two typical environments: a GPU workstation and an edge device. We benchmark the time and memory consumption of both training and inference, covering both recommendation tasks. By doing so, we shed light on the overhead introduced by each method in real-world deployment.
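As a non-authoritative sketch of how such measurements can be collected, the snippet below times a batched forward pass and records peak GPU memory using standard PyTorch utilities; \texttt{model} and \texttt{batch} are placeholders rather than components of our released code.
\begin{verbatim}
import time
import torch

@torch.no_grad()
def profile_inference(model, batch, device="cuda", warmup=10, iters=100):
    """Measure average forward latency (ms) and peak GPU memory (MB)."""
    model = model.to(device).eval()
    batch = batch.to(device)
    for _ in range(warmup):                 # warm up kernels and caches
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
        torch.cuda.reset_peak_memory_stats()
    start = time.perf_counter()
    for _ in range(iters):
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    latency_ms = (time.perf_counter() - start) / iters * 1000
    peak_mb = (torch.cuda.max_memory_allocated() / 2**20
               if device == "cuda" else float("nan"))
    return latency_ms, peak_mb
\end{verbatim}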
In summary, our main contributions are:
\begin{itemize}
\item We extensively evaluate various LERSs' performance in two main recommendation tasks: content-based recommendation and collaborative filtering. Concretely, we cross-test different methods to verify their generalizability under different sparsity rates and tasks.
\item We show that magnitude-based pruning, a simple baseline for embedding compression, can also achieve competitive results compared to recently proposed methods.
\item We perform an efficiency benchmark and outline the key differences between the on-device and GPU-based settings, thus providing insights into the real-world performance of those LERSs in varying deployment environments.
\item We release all the source codes at \href{https://github.com/chenxing1999/recsys-benchmark}{https://github.com/chenxing1999/recsys-benchmark}, which include the implementation of various embedding compression methods in PyTorch, such that the community can reuse and apply them to subsequent research problems.
\end{itemize}
\begin{figure}
\centering
\subcaptionbox{Original}
{
\vspace{-.2cm}
\includegraphics[height=0.20\textheight]{table-and-figures/methods/emb_diagrams-original.drawio.pdf}
}
\hfill
\subcaptionbox{Compositional}
{
\vspace{-.2cm}
\includegraphics[height=0.20\textheight]{table-and-figures/methods/emb_diagrams-compo.drawio.pdf}
}
\hfill
\subcaptionbox{Pruning}
{
\vspace{-.2cm}
\includegraphics[height=0.20\textheight]{table-and-figures/methods/emb_diagrams-pruning.drawio.pdf}
}
\hfill
\subcaptionbox{NAS-based}
{
\vspace{-.2cm}
\includegraphics[height=0.20\textheight]{table-and-figures/methods/emb_diagrams-automl.pdf}
}
\caption{Illustration of the main archetypes of lightweight embeddings in LERSs}
\label{fig:related-works}
\end{figure}
Related Work
\label{sec:related}
In this section, we provide relevant background for our research by reviewing the classic recommendation tasks, representative LERSs, and the benchmarking efforts in the recommendation literature.
\subsection{Lightweight Embeddings in Recommendation}
\subsubsection{Compositional Embedding-based LERSs}
This type of LERS represents the original $n$ embedding vectors with substantially fewer parameters; a common approach is to employ a smaller set of $m \ll n$ meta-embedding vectors. To compose a single embedding for each discrete feature, a unique subset of $t$ meta-embeddings is selected and combined. Mathematically, this is achieved by:
\begin{equation*}
\mathcal{H}(i) = \{ i_1, i_2, ..., i_t \} = \text{hash}(i),
\end{equation*}
where $i \in \mathbb{N}$ is the original index of the feature, and $\text{hash}(\cdot)$ maps $i$ into $t$ distinct indices $\{ i_1, i_2, ..., i_t \} \subset \mathbb{N}_{<m}$.
To simplify notation, let us assume that there is only one set of meta-embedding vectors, i.e., one meta-embedding table $\mathbf{E}^{meta}\in \mathbb{R}^{m\times d}$; then the compositional embedding $\mathbf{e}_i$ of the $i$-th feature is:
\begin{equation*}
\mathbf{e}_i = \text{combine}(\mathbf{e}_{i_1}^{meta}, \mathbf{e}_{i_2}^{meta}, ..., \mathbf{e}_{i_t}^{meta}),
\end{equation*}
where each $\mathbf{e}_{i'}^{meta}$ corresponds to the $i'$-th row of $\mathbf{E}^{meta}$, and $\text{combine}(\cdot)$ is any operation that merges multiple vectors into one, e.g., multiplication, sum, or concatenation.
As an extension to this basic form, Shi et al. <|cite_start|> (Reference: Compositional Embeddings Using Complementary Partitions for Memory-Efficient Recommendation Systems: Modern deep learning-based recommendation systems exploit hundreds to thousands of different categorical features, each with millions of different categories ranging from clicks to posts. To respect the natural diversity within the categorical data, embeddings map each category to a unique dense representation within an embedded space. Since each categorical feature could take on as many as tens of millions of different possible categories, the embedding tables form the primary memory bottleneck during both training and inference. We propose a novel approach for reducing the embedding size in an end-to-end fashion by exploiting complementary partitions of the category set to produce a unique embedding vector for each category without explicit definition. By storing multiple smaller embedding tables based on each complementary partition and combining embeddings from each table, we define a unique embedding for each category at smaller memory cost. This approach may be interpreted as using a specific fixed codebook to ensure uniqueness of each category's representation. Our experimental results demonstrate the effectiveness of our approach over the hashing trick for reducing the size of the embedding tables in terms of model loss and accuracy, while retaining a similar reduction in the number of parameters.) <|cite_end|>use the quotient-remainder trick (QR) to hash the original embedding index $i$ into two new indices, which are used to extract embedding vectors from two meta-embedding tables.
These vectors can then be combined through simple operations, such as element-wise multiplication, addition, or concatenation, to create an embedding vector representing the original feature.
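For concreteness, the following is a minimal PyTorch sketch of a QR-style compositional embedding with multiplicative combination; it is an illustrative simplification rather than the exact implementation of the cited work or of our benchmark.
\begin{verbatim}
import torch
import torch.nn as nn

class QREmbedding(nn.Module):
    """Compose each feature embedding from two small meta-embedding tables."""
    def __init__(self, num_features, num_buckets, dim):
        super().__init__()
        self.num_buckets = num_buckets
        n_quotient = (num_features + num_buckets - 1) // num_buckets
        self.quotient = nn.Embedding(n_quotient, dim)
        self.remainder = nn.Embedding(num_buckets, dim)

    def forward(self, idx):
        q = torch.div(idx, self.num_buckets, rounding_mode="floor")
        r = idx % self.num_buckets
        # element-wise product of the two retrieved meta-embeddings
        return self.quotient(q) * self.remainder(r)

emb = QREmbedding(num_features=1_000_000, num_buckets=1_000, dim=16)
vectors = emb(torch.tensor([0, 42, 999_999]))   # shape: (3, 16)
\end{verbatim}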
Following this, MEmCom <|cite_start|> (Reference: Learning Compressed Embeddings for On-Device Inference: In deep learning, embeddings are widely used to represent categorical entities such as words, apps, and movies. An embedding layer maps each entity to a unique vector, causing the layer's memory requirement to be proportional to the number of entities. In the recommendation domain, a given category can have hundreds of thousands of entities, and its embedding layer can take gigabytes of memory. The scale of these networks makes them difficult to deploy in resource constrained environments. In this paper, we propose a novel approach for reducing the size of an embedding table while still mapping each entity to its own unique embedding. Rather than maintaining the full embedding table, we construct each entity's embedding "on the fly" using two separate embedding tables. The first table employs hashing to force multiple entities to share an embedding. The second table contains one trainable weight per entity, allowing the model to distinguish between entities sharing the same embedding. Since these two tables are trained jointly, the network is able to learn a unique embedding per entity, helping it maintain a discriminative capability similar to a model with an uncompressed embedding table. We call this approach MEmCom (Multi-Embedding Compression). We compare with state-of-the-art model compression techniques for multiple problem classes including classification and ranking. On four popular recommender system datasets, MEmCom had a 4% relative loss in nDCG while compressing the input embedding sizes of our recommendation models by 16x, 4x, 12x, and 40x. MEmCom outperforms the state-of-the-art techniques, which achieved 16%, 6%, 10%, and 8% relative loss in nDCG at the respective compression ratios. Additionally, MEmCom is able to compress the RankNet ranking model by 32x on a dataset with millions of users' interactions with games while incurring only a 1% relative loss in nDCG.) <|cite_end|>utilizes two meta embedding tables ($\mathbf{E_1} \in \mathbb{R}^{m \times d}, \mathbf{E_2} \in \mathbb{R}^{n \times 1}$) and a pair of indices ($i_1 = i\text{ mod }m, i_2 = i$), then multiplying two meta embedding vectors to get the final embedding associated with $i$.
The above methods are efficient as they only apply simple aggregation functions; however, this typically limits performance due to meta-embedding collisions, especially at higher compression rates.
Another approach is TT-Rec <|cite_start|> (Reference: TT-Rec: Tensor Train Compression for Deep Learning Recommendation Models: The memory capacity of embedding tables in deep learning recommendation models (DLRMs) is increasing dramatically from tens of GBs to TBs across the industry. Given the fast growth in DLRMs, novel solutions are urgently needed, in order to enable fast and efficient DLRM innovations. At the same time, this must be done without having to exponentially increase infrastructure capacity demands. In this paper, we demonstrate the promising potential of Tensor Train decomposition for DLRMs (TT-Rec), an important yet under-investigated context. We design and implement optimized kernels (TT-EmbeddingBag) to evaluate the proposed TT-Rec design. TT-EmbeddingBag is 3 times faster than the SOTA TT implementation. The performance of TT-Rec is further optimized with the batched matrix multiplication and caching strategies for embedding vector lookup operations. In addition, we present mathematically and empirically the effect of weight initialization distribution on DLRM accuracy and propose to initialize the tensor cores of TT-Rec following the sampled Gaussian distribution. We evaluate TT-Rec across three important design space dimensions -- memory capacity, accuracy, and timing performance -- by training MLPerf-DLRM with Criteo's Kaggle and Terabyte data sets. TT-Rec achieves 117 times and 112 times model size compression, for Kaggle and Terabyte, respectively. This impressive model size reduction can come with no accuracy nor training time overhead as compared to the uncompressed baseline.) <|cite_end|>, which employs tensor-train decomposition (TTD) and a customized weight initialization to compress the embedding table.
Because TTD can transform the exponential storage requirement into a linear function, it can achieve a higher compression rate than previous methods.
ODRec <|cite_start|> (Reference: On-Device Next-Item Recommendation with Self-Supervised Knowledge Distillation: Modern recommender systems operate in a fully server-based fashion. To cater to millions of users, the frequent model maintaining and the high-speed processing for concurrent user requests are required, which comes at the cost of a huge carbon footprint. Meanwhile, users need to upload their behavior data even including the immediate environmental context to the server, raising the public concern about privacy. On-device recommender systems circumvent these two issues with cost-conscious settings and local inference. However, due to the limited memory and computing resources, on-device recommender systems are confronted with two fundamental challenges: (1) how to reduce the size of regular models to fit edge devices? (2) how to retain the original capacity? Previous research mostly adopts tensor decomposition techniques to compress the regular recommendation model with limited compression ratio so as to avoid drastic performance degradation. In this paper, we explore ultra-compact models for next-item recommendation, by loosing the constraint of dimensionality consistency in tensor decomposition. Meanwhile, to compensate for the capacity loss caused by compression, we develop a self-supervised knowledge distillation framework which enables the compressed model (student) to distill the essential information lying in the raw data, and improves the long-tail item recommendation through an embedding-recombination strategy with the original model (teacher). The extensive experiments on two benchmarks demonstrate that, with 30x model size reduction, the compressed model almost comes with no accuracy loss, and even outperforms its uncompressed counterpart in most cases.) <|cite_end|>proposes to further compress the model with semi-tensor product-based tensor-train decomposition (STTD), and compensates for the higher compression rate with knowledge distillation.
DHE <|cite_start|> (Reference: Learning to Embed Categorical Features without Embedding Tables for Recommendation: Embedding learning of categorical features (e.g. user/item IDs) is at the core of various recommendation models including matrix factorization and neural collaborative filtering. The standard approach creates an embedding table where each row represents a dedicated embedding vector for every unique feature value. However, this method fails to efficiently handle high-cardinality features and unseen feature values (e.g. new video ID) that are prevalent in real-world recommendation systems. In this paper, we propose an alternative embedding framework Deep Hash Embedding (DHE), replacing embedding tables by a deep embedding network to compute embeddings on the fly. DHE first encodes the feature value to a unique identifier vector with multiple hashing functions and transformations, and then applies a DNN to convert the identifier vector to an embedding. The encoding module is deterministic, non-learnable, and free of storage, while the embedding network is updated during the training time to learn embedding generation. Empirical results show that DHE achieves comparable AUC against the standard one-hot full embedding, with smaller model sizes. Our work sheds light on the design of DNN-based alternative embedding schemes for categorical features without using embedding table lookup.) <|cite_end|>proposes to use deterministic hash functions to create a pseudo-embedding for each feature, which is then fed to an MLP to produce the final embedding. DHE achieves good results and can be further compressed with other techniques; however, it requires much more time for training and inference than other methods.
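The snippet below gives a hedged, minimal sketch of this hash-then-MLP idea; the number of hash functions, the modulus, and the MLP width are arbitrary placeholders and do not reflect the original DHE configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class DeepHashEmbedding(nn.Module):
    """Replace the embedding table with fixed hash encodings plus a small MLP."""
    def __init__(self, k=1024, hidden=256, dim=16, prime=2_038_074_743):
        super().__init__()
        self.prime = prime
        # fixed, non-learnable universal-hashing coefficients
        self.register_buffer("a", torch.randint(1, prime, (k,)))
        self.register_buffer("b", torch.randint(0, prime, (k,)))
        self.mlp = nn.Sequential(nn.Linear(k, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim))

    def forward(self, idx):
        # k hash values per feature index, rescaled to roughly [-1, 1]
        h = (idx.unsqueeze(-1) * self.a + self.b) % self.prime
        enc = h.float() / self.prime * 2 - 1
        return self.mlp(enc)
\end{verbatim}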
\subsubsection{Pruning} This is one of the most classic approaches to compressing deep learning models in general and recommendation models in particular. These methods set a portion of the weights to zero, effectively removing them from the model.
\begin{equation*}
\mathbf{e}_i = \hat{\mathbf{E}}_i = (\mathbf{E} \odot \mathbf{M})_i, \quad \text{with } \mathbf{M} \in \{ 0, 1 \}^{n \times d},
\end{equation*}
where $\mathbf{E}$ denotes the full learnable embedding table, $\mathbf{M}$ is the embedding mask, $\odot$ is element-wise multiplication.
The main challenge of pruning-based methods is how to effectively find $\mathbf{M}$.
The most traditional approach in this category is magnitude pruning <|cite_start|> (Reference: {Pruning Versus Clipping in Neural Networks: Dans un reseau neuronal, le nombre d'interconnexions est reduit par elimination des liaisons les plus faibles. Les performances sont ensuite ameliorees par application du theoreme d'apprentissage) <|cite_end|> <|cite_start|> (Reference: Learning both Weights and Connections for Efficient Neural Networks: Neural networks are both computationally intensive and memory intensive, making them difficult to deploy on embedded systems. Also, conventional networks fix the architecture before training starts; as a result, training cannot improve the architecture. To address these limitations, we describe a method to reduce the storage and computation required by neural networks by an order of magnitude without affecting their accuracy by learning only the important connections. Our method prunes redundant connections using a three-step method. First, we train the network to learn which connections are important. Next, we prune the unimportant connections. Finally, we retrain the network to fine tune the weights of the remaining connections. On the ImageNet dataset, our method reduced the number of parameters of AlexNet by a factor of 9x, from 61 million to 6.7 million, without incurring accuracy loss. Similar experiments with VGG-16 found that the number of parameters can be reduced by 13x, from 138 million to 10.3 million, again with no loss of accuracy.) <|cite_end|>, where after the initial training phase, the weights are sorted by the magnitude of their values and the lowest-ranked ones are zeroed out.
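Since magnitude pruning also serves as our proposed baseline, a minimal sketch of pruning an embedding table to a target sparsity purely by weight magnitude is given below; it illustrates the core idea rather than the exact training schedule used in our experiments.
\begin{verbatim}
import torch

def magnitude_mask(embedding, sparsity):
    """Return a 0/1 mask keeping the largest-magnitude entries.

    `sparsity` is the fraction of parameters to remove (0.95 keeps 5%).
    """
    flat = embedding.abs().flatten()
    k = int(flat.numel() * sparsity)
    if k == 0:
        return torch.ones_like(embedding)
    threshold = torch.kthvalue(flat, k).values    # k-th smallest magnitude
    return (embedding.abs() > threshold).float()

# after (or periodically during) training, zero out the smallest weights
E = torch.randn(10_000, 16)
mask = magnitude_mask(E, sparsity=0.95)
E_pruned = E * mask
\end{verbatim}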
Leveraging this,
DeepLight <|cite_start|> (Reference: DeepLight: Deep Lightweight Feature Interactions for Accelerating CTR Predictions in Ad Serving: Click-through rate (CTR) prediction is a crucial task in online display advertising. The embedding-based neural networks have been proposed to learn both explicit feature interactions through a shallow component and deep feature interactions using a deep neural network (DNN) component. These sophisticated models, however, slow down the prediction inference by at least hundreds of times. To address the issue of significantly increased serving delay and high memory usage for ad serving in production, this paper presents \emph{DeepLight}: a framework to accelerate the CTR predictions in three aspects: 1) accelerate the model inference via explicitly searching informative feature interactions in the shallow component; 2) prune redundant layers and parameters at intra-layer and inter-layer level in the DNN component; 3) promote the sparsity of the embedding layer to preserve the most discriminant signals. By combining the above efforts, the proposed approach accelerates the model inference by 46X on Criteo dataset and 27X on Avazu dataset without any loss on the prediction accuracy. This paves the way for successfully deploying complicated embedding-based neural networks in production for ad serving.) <|cite_end|>gradually prunes the embedding table in the training phase based on the magnitude values until reaching the target memory budget.
Later, PEP <|cite_start|> (Reference: Learnable Embedding Sizes for Recommender Systems: The embedding-based representation learning is commonly used in deep learning recommendation models to map the raw sparse features to dense vectors. The traditional embedding manner that assigns a uniform size to all features has two issues. First, the numerous features inevitably lead to a gigantic embedding table that causes a high memory usage cost. Second, it is likely to cause the over-fitting problem for those features that do not require too large representation capacity. Existing works that try to address the problem always cause a significant drop in recommendation performance or suffers from the limitation of unaffordable training time cost. In this paper, we proposed a novel approach, named PEP (short for Plug-in Embedding Pruning), to reduce the size of the embedding table while avoiding the drop of recommendation accuracy. PEP prunes embedding parameter where the pruning threshold(s) can be adaptively learned from data. Therefore we can automatically obtain a mixed-dimension embedding-scheme by pruning redundant parameters for each feature. PEP is a general framework that can plug in various base recommendation models. Extensive experiments demonstrate it can efficiently cut down embedding parameters and boost the base model's performance. Specifically, it achieves strong recommendation performance while reducing 97-99% parameters. As for the computation cost, PEP only brings an additional 20-30% time cost compared with base models. Codes are available at https://github.com/ssui-liu/learnable-embed-sizes-for-RecSys.) <|cite_end|>applies ``soft-thresholding'' to iteratively prune the parameters in the first training step. Subsequently, the model is retrained based on the found mask with the same initialized parameters (Lottery ticket hypothesis <|cite_start|> (Reference: The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance. We find that a standard pruning technique naturally uncovers subnetworks whose initializations made them capable of training effectively. Based on these results, we articulate the "lottery ticket hypothesis:" dense, randomly-initialized, feed-forward networks contain subnetworks ("winning tickets") that - when trained in isolation - reach test accuracy comparable to the original network in a similar number of iterations. The winning tickets we find have won the initialization lottery: their connections have initial weights that make training particularly effective. We present an algorithm to identify winning tickets and a series of experiments that support the lottery ticket hypothesis and the importance of these fortuitous initializations. We consistently find winning tickets that are less than 10-20% of the size of several fully-connected and convolutional feed-forward architectures for MNIST and CIFAR10. Above this size, the winning tickets that we find learn faster than the original network and reach higher test accuracy.) <|cite_end|>).
The ``soft-thresholding'' technique allows PEP to provide a flexible embedding size for each feature.
However, PEP has a high training overhead and is hard to tune for a specific compression rate, owing to its two-step training and the various hyperparameters affecting performance.
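To illustrate the soft-thresholding mechanism, a simplified sketch is shown below; the single global threshold and the sigmoid re-parameterization are simplifying assumptions rather than PEP's exact formulation.
\begin{verbatim}
import torch
import torch.nn as nn

class SoftThresholdEmbedding(nn.Module):
    """Embedding whose entries are softly pruned by a learnable threshold."""
    def __init__(self, num_features, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_features, dim) * 0.01)
        self.s = nn.Parameter(torch.tensor(-5.0))   # threshold logit

    def forward(self, idx):
        w = self.weight[idx]
        thr = torch.sigmoid(self.s)                 # re-parameterized threshold
        # entries with |w| below the threshold become exactly zero
        return torch.sign(w) * torch.relu(w.abs() - thr)
\end{verbatim}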
In contrast, SSEDS <|cite_start|> (Reference: Single-shot Embedding Dimension Search in Recommender System: As a crucial component of most modern deep recommender systems, feature embedding maps high-dimensional sparse user/item features into low-dimensional dense embeddings. However, these embeddings are usually assigned a unified dimension, which suffers from the following issues: (1) high memory usage and computation cost. (2) sub-optimal performance due to inferior dimension assignments. In order to alleviate the above issues, some works focus on automated embedding dimension search by formulating it as hyper-parameter optimization or embedding pruning problems. However, they either require well-designed search space for hyperparameters or need time-consuming optimization procedures. In this paper, we propose a Single-Shot Embedding Dimension Search method, called SSEDS, which can efficiently assign dimensions for each feature field via a single-shot embedding pruning operation while maintaining the recommendation accuracy of the model. Specifically, it introduces a criterion for identifying the importance of each embedding dimension for each feature field. As a result, SSEDS could automatically obtain mixed-dimensional embeddings by explicitly reducing redundant embedding dimensions based on the corresponding dimension importance ranking and the predefined parameter budget. Furthermore, the proposed SSEDS is model-agnostic, meaning that it could be integrated into different base recommendation models. The extensive offline experiments are conducted on two widely used public datasets for CTR prediction tasks, and the results demonstrate that SSEDS can still achieve strong recommendation performance even if it has reduced 90\% parameters. Moreover, SSEDS has also been deployed on the WeChat Subscription platform for practical recommendation services. The 7-day online A/B test results show that SSEDS can significantly improve the performance of the online recommendation model.) <|cite_end|>first trains the original model. Then, they determine the embedding mask with the proposed saliency score computed through the gradient and retrain the model with the newfound mask.
While SSEDS introduces only minor overhead (a single forward and backward pass to calculate the embedding mask), it assumes similar embedding sizes for features within the same field, which constrains its performance.
Dynamic Sparse Learning (DSL) <|cite_start|> (Reference: Dynamic Sparse Learning: A Novel Paradigm for Efficient Recommendation: In the realm of deep learning-based recommendation systems, the increasing computational demands, driven by the growing number of users and items, pose a significant challenge to practical deployment. This challenge is primarily twofold: reducing the model size while effectively learning user and item representations for efficient recommendations. Despite considerable advancements in model compression and architecture search, prevalent approaches face notable constraints. These include substantial additional computational costs from pre-training/re-training in model compression and an extensive search space in architecture design. Additionally, managing complexity and adhering to memory constraints is problematic, especially in scenarios with strict time or space limitations. Addressing these issues, this paper introduces a novel learning paradigm, Dynamic Sparse Learning (DSL), tailored for recommendation models. DSL innovatively trains a lightweight sparse model from scratch, periodically evaluating and dynamically adjusting each weight's significance and the model's sparsity distribution during the training. This approach ensures a consistent and minimal parameter budget throughout the full learning lifecycle, paving the way for "end-to-end" efficiency from training to inference. Our extensive experimental results underline DSL's effectiveness, significantly reducing training and inference costs while delivering comparable recommendation performance.) <|cite_end|>dynamically adjusts the sparsity distribution of model weights by pruning and growth strategies to eliminate redundant parameters and activate important ones. Specifically, During training, they initially prune a significant portion of the model weights, fine-tune the model, and then prune and regrow again with a smaller amount after a few iterations.
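A hedged sketch of such a periodic prune-and-regrow update is given below; the regrowth criterion (gradient magnitude) is a simplifying assumption here and may differ from DSL's actual strategy.
\begin{verbatim}
import torch

def prune_and_regrow(weight, mask, grad, update_frac=0.1):
    """Drop the weakest active weights and regrow the same number of
    inactive positions with the largest gradient magnitude."""
    active = mask.bool()
    n_update = int(active.sum().item() * update_frac)
    if n_update == 0:
        return mask

    # prune: smallest-magnitude weights among the active positions
    active_mag = weight.abs().masked_fill(~active, float("inf")).flatten()
    drop_idx = torch.topk(active_mag, n_update, largest=False).indices

    # regrow: inactive positions with the largest gradient magnitude
    inactive_grad = grad.abs().masked_fill(active, float("-inf")).flatten()
    grow_idx = torch.topk(inactive_grad, n_update, largest=True).indices

    new_mask = mask.flatten().clone()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)
\end{verbatim}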
\subsubsection{NAS-based} Methods in this category search for the optimal model structure within a predefined search space, typically via reinforcement learning or an evolutionary algorithm. They are commonly formulated as a bi-level optimization problem w.r.t. both training and validation data:
\begin{equation*}
\hat{S} = \arg\!\min_S L_{val} \left( \hat{\Theta}, S \right) \text{, s.t. } \hat{{\Theta}} = \arg\!\min_{\Theta} L_{train}\left({\Theta}, S \right),
\end{equation*}
where $S$ denotes the structural (architecture) parameters and $\Theta$ denotes the model parameters.
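In practice, this bi-level problem is usually optimized by alternating first-order updates, roughly as in the sketch below; the optimizers and loss function are placeholders, and the second-order terms used by some methods are omitted.
\begin{verbatim}
def bilevel_search_step(model, train_batch, val_batch,
                        model_opt, arch_opt, compute_loss):
    """One alternating update: Theta on train data, S on validation data."""
    # inner step: update model parameters Theta (structure S held fixed)
    model_opt.zero_grad()
    compute_loss(model, train_batch).backward()   # L_train(Theta, S)
    model_opt.step()

    # outer step: update structure parameters S (first-order approximation)
    arch_opt.zero_grad()
    compute_loss(model, val_batch).backward()     # L_val(Theta, S)
    arch_opt.step()
\end{verbatim}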
One of the first works in this category is NIS (Neural Input Search) <|cite_start|> (Reference: Neural Input Search for Large Scale Recommendation Models: Recommendation problems with large numbers of discrete items, such as products, webpages, or videos, are ubiquitous in the technology industry. Deep neural networks are being increasingly used for these recommendation problems. These models use embeddings to represent discrete items as continuous vectors, and the vocabulary sizes and embedding dimensions, although heavily influence the model's accuracy, are often manually selected in a heuristical manner. We present Neural Input Search (NIS), a technique for learning the optimal vocabulary sizes and embedding dimensions for categorical features. The goal is to maximize prediction accuracy subject to a constraint on the total memory used by all embeddings. Moreover, we argue that the traditional Single-size Embedding (SE), which uses the same embedding dimension for all values of a feature, suffers from inefficient usage of model capacity and training data. We propose a novel type of embedding, namely Multi-size Embedding (ME), which allows the embedding dimension to vary for different values of the feature. During training we use reinforcement learning to find the optimal vocabulary size for each feature and embedding dimension for each value of the feature. In experiments on two common types of large scale recommendation problems, i.e. retrieval and ranking problems, NIS automatically found better vocabulary and embedding sizes that result in $6.8\%$ and $1.8\%$ relative improvements on Recall@1 and ROC-AUC over manually optimized ones.) <|cite_end|>, which splits the original embedding table into multiple smaller embedding blocks to create the search space. Then, NIS applies a policy network to determine the best set of embedding blocks given a memory budget.
AutoEmb uses controllers that take feature popularity as input to suggest the embedding sizes of different users and items for the recommendation network. It employs differentiable architecture search (DARTS <|cite_start|> (Reference: DARTS: Differentiable Architecture Search: This paper addresses the scalability challenge of architecture search by formulating the task in a differentiable manner. Unlike conventional approaches of applying evolution or reinforcement learning over a discrete and non-differentiable search space, our method is based on the continuous relaxation of the architecture representation, allowing efficient search of the architecture using gradient descent. Extensive experiments on CIFAR-10, ImageNet, Penn Treebank and WikiText-2 show that our algorithm excels in discovering high-performance convolutional architectures for image classification and recurrent architectures for language modeling, while being orders of magnitude faster than state-of-the-art non-differentiable techniques. Our implementation has been made publicly available to facilitate further research on efficient architecture search algorithms.) <|cite_end|>) to solve the bi-level optimization problem, where the first and second stages optimize the recommendation network's weights on the training set and the controllers' weights on the validation set, respectively.
While also applying DARTS, AutoDim <|cite_start|> (Reference: Autodim: Field-aware embedding dimension searchin recommender systems: Practical large-scale recommender systems usually contain thousands of feature fields from users, items, contextual information, and their interactions. Most of them empirically allocate a unified dimension to all feature fields, which is memory inefficient. Thus it is highly desired to assign various embedding dimensions to different feature fields according to their importance and predictability. Due to the large amounts of feature fields and the nuanced relationship between embedding dimensions with feature distributions and neural network architectures, manually allocating embedding dimensions in practical recommender systems can be challenging. To this end, we propose an AutoML-based framework (AutoDim) in this paper, which can automatically select dimensions for different feature fields in a data-driven fashion. Specifically, we first proposed an end-to-end differentiable framework that can calculate the weights over various dimensions in a soft and continuous manner for feature fields, and an AutoML-based optimization algorithm; then, we derive a hard and discrete embedding component architecture according to the maximal weights and retrain the whole recommender framework. We conduct extensive experiments on benchmark datasets to validate the effectiveness of AutoDim.) <|cite_end|>uses a set of weights that directly represent the probabilities of candidate dimension sizes, and leverages the Gumbel-Softmax technique <|cite_start|> (Reference: Categorical Reparameterization with Gumbel-Softmax: Categorical variables are a natural choice for representing discrete structure in the world. However, stochastic neural networks rarely use categorical latent variables due to the inability to backpropagate through samples. In this work, we present an efficient gradient estimator that replaces the non-differentiable sample from a categorical distribution with a differentiable sample from a novel Gumbel-Softmax distribution. This distribution has the essential property that it can be smoothly annealed into a categorical distribution. We show that our Gumbel-Softmax estimator outperforms state-of-the-art gradient estimators on structured output prediction and unsupervised generative modeling tasks with categorical latent variables, and enables large speedups on semi-supervised classification.) <|cite_end|>to optimize these parameters.
However, DARTS-based methods suffer from high training costs.
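The snippet below sketches such Gumbel-Softmax-based mixing of candidate embedding sizes; the candidate dimensions, projection layers, and single feature field are illustrative assumptions rather than AutoDim's exact design.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedDimEmbedding(nn.Module):
    """Search over candidate embedding sizes with Gumbel-Softmax weights."""
    def __init__(self, num_features, candidate_dims=(2, 8, 16), out_dim=16):
        super().__init__()
        self.tables = nn.ModuleList(nn.Embedding(num_features, d)
                                    for d in candidate_dims)
        # project every candidate to a shared output size so they can be mixed
        self.projs = nn.ModuleList(nn.Linear(d, out_dim, bias=False)
                                   for d in candidate_dims)
        self.alpha = nn.Parameter(torch.zeros(len(candidate_dims)))  # logits

    def forward(self, idx, tau=1.0):
        weights = F.gumbel_softmax(self.alpha, tau=tau)   # soft one-hot
        cands = [proj(tab(idx)) for tab, proj in zip(self.tables, self.projs)]
        return sum(w * c for w, c in zip(weights, cands))
\end{verbatim}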
RULE <|cite_start|> (Reference: Learning Elastic Embeddings for Customizing On-Device Recommenders: In today's context, deploying data-driven services like recommendation on edge devices instead of cloud servers becomes increasingly attractive due to privacy and network latency concerns. A common practice in building compact on-device recommender systems is to compress their embeddings which are normally the cause of excessive parameterization. However, despite the vast variety of devices and their associated memory constraints, existing memory-efficient recommender systems are only specialized for a fixed memory budget in every design and training life cycle, where a new model has to be retrained to obtain the optimal performance while adapting to a smaller/larger memory budget. In this paper, we present a novel lightweight recommendation paradigm that allows a well-trained recommender to be customized for arbitrary device-specific memory constraints without retraining. The core idea is to compose elastic embeddings for each item, where an elastic embedding is the concatenation of a set of embedding blocks that are carefully chosen by an automated search function. Correspondingly, we propose an innovative approach, namely recommendation with universally learned elastic embeddings (RULE). To ensure the expressiveness of all candidate embedding blocks, RULE enforces a diversity-driven regularization when learning different embedding blocks. Then, a performance estimator-based evolutionary search function is designed, allowing for efficient specialization of elastic embeddings under any memory constraint for on-device recommendation. Extensive experiments on real-world datasets reveal the superior performance of RULE under tight memory budgets.) <|cite_end|>suggests training a supernet containing various embedding blocks and an evolutionary search to search for the best embedding block set given a memory budget. To reduce the computation cost for evolutionary search, they train a performance estimator to predict an estimated performance of a given embedding block set.
CIESS <|cite_start|> (Reference: Continuous Input Embedding Size Search For Recommender Systems: Latent factor models are the most popular backbones for today's recommender systems owing to their prominent performance. Latent factor models represent users and items as real-valued embedding vectors for pairwise similarity computation, and all embeddings are traditionally restricted to a uniform size that is relatively large (e.g., 256-dimensional). With the exponentially expanding user base and item catalog in contemporary e-commerce, this design is admittedly becoming memory-inefficient. To facilitate lightweight recommendation, reinforcement learning (RL) has recently opened up opportunities for identifying varying embedding sizes for different users/items. However, challenged by search efficiency and learning an optimal RL policy, existing RL-based methods are restricted to highly discrete, predefined embedding size choices. This leads to a largely overlooked potential of introducing finer granularity into embedding sizes to obtain better recommendation effectiveness under a given memory budget. In this paper, we propose continuous input embedding size search (CIESS), a novel RL-based method that operates on a continuous search space with arbitrary embedding sizes to choose from. In CIESS, we further present an innovative random walk-based exploration strategy to allow the RL policy to efficiently explore more candidate embedding sizes and converge to a better decision. CIESS is also model-agnostic and hence generalizable to a variety of latent factor RSs, whilst experiments on two real-world datasets have shown state-of-the-art performance of CIESS under different memory budgets when paired with three popular recommendation models.) <|cite_end|>applies reinforcement learning with a random walk-based exploration strategy to efficiently identify the optimal embedding size for each user and item.
BET <|cite_start|> (Reference: Budgeted Embedding Table For Recommender Systems: At the heart of contemporary recommender systems (RSs) are latent factor models that provide quality recommendation experience to users. These models use embedding vectors, which are typically of a uniform and fixed size, to represent users and items. As the number of users and items continues to grow, this design becomes inefficient and hard to scale. Recent lightweight embedding methods have enabled different users and items to have diverse embedding sizes, but are commonly subject to two major drawbacks. Firstly, they limit the embedding size search to optimizing a heuristic balancing the recommendation quality and the memory complexity, where the trade-off coefficient needs to be manually tuned for every memory budget requested. The implicitly enforced memory complexity term can even fail to cap the parameter usage, making the resultant embedding table fail to meet the memory budget strictly. Secondly, most solutions, especially reinforcement learning based ones derive and optimize the embedding size for each each user/item on an instance-by-instance basis, which impedes the search efficiency. In this paper, we propose Budgeted Embedding Table (BET), a novel method that generates table-level actions (i.e., embedding sizes for all users and items) that is guaranteed to meet pre-specified memory budgets. Furthermore, by leveraging a set-based action formulation and engaging set representation learning, we present an innovative action search strategy powered by an action fitness predictor that efficiently evaluates each table-level action. Experiments have shown state-of-the-art performance on two real-world datasets when BET is paired with three popular recommender models under different memory budgets.) <|cite_end|>leverages a non-parametric sampler to eliminate the implicit necessity of fine-tuning a coefficient trade-off between performance and storage. This approach, however, requires multiple fine-tuning iterations of the model. To address this overhead, BET introduces a parametric performance estimator.
\subsubsection{Hybrid} methods combine approaches from various categories. OptEmbed <|cite_start|> (Reference: OptEmbed: Learning Optimal Embedding Table for Click-through Rate Prediction: Learning embedding table plays a fundamental role in Click-through rate(CTR) prediction from the view of the model performance and memory usage. The embedding table is a two-dimensional tensor, with its axes indicating the number of feature values and the embedding dimension, respectively. To learn an efficient and effective embedding table, recent works either assign various embedding dimensions for feature fields and reduce the number of embeddings respectively or mask the embedding table parameters. However, all these existing works cannot get an optimal embedding table. On the one hand, various embedding dimensions still require a large amount of memory due to the vast number of features in the dataset. On the other hand, decreasing the number of embeddings usually suffers from performance degradation, which is intolerable in CTR prediction. Finally, pruning embedding parameters will lead to a sparse embedding table, which is hard to be deployed. To this end, we propose an optimal embedding table learning framework OptEmbed, which provides a practical and general method to find an optimal embedding table for various base CTR models. Specifically, we propose pruning the redundant embeddings regarding corresponding features' importance by learnable pruning thresholds. Furthermore, we consider assigning various embedding dimensions as one single candidate architecture. To efficiently search the optimal embedding dimensions, we design a uniform embedding dimension sampling scheme to equally train all candidate architectures, meaning architecture-related parameters and learnable thresholds are trained simultaneously in one supernet. We then propose an evolution search method based on the supernet to find the optimal embedding dimensions for each field. Experiments on public datasets show that OptEmbed can learn a compact embedding table which can further improve the model performance.) <|cite_end|>learns the pruning mask for embedding rows based on magnitude while training the supernet with uniform sampled masks for dimension sizes. Then, they apply an evolutionary algorithm to find the most optimal configuration and retrain the model with the found configuration. CERP <|cite_start|> (Reference: Learning Compact Compositional Embeddings via Regularized Pruning for Recommendation: Latent factor models are the dominant backbones of contemporary recommender systems (RSs) given their performance advantages, where a unique vector embedding with a fixed dimensionality (e.g., 128) is required to represent each entity (commonly a user/item). Due to the large number of users and items on e-commerce sites, the embedding table is arguably the least memory-efficient component of RSs. For any lightweight recommender that aims to efficiently scale with the growing size of users/items or to remain applicable in resource-constrained settings, existing solutions either reduce the number of embeddings needed via hashing, or sparsify the full embedding table to switch off selected embedding dimensions. However, as hash collision arises or embeddings become overly sparse, especially when adapting to a tighter memory budget, those lightweight recommenders inevitably have to compromise their accuracy. 
To this end, we propose a novel compact embedding framework for RSs, namely Compositional Embedding with Regularized Pruning (CERP). Specifically, CERP represents each entity by combining a pair of embeddings from two independent, substantially smaller meta-embedding tables, which are then jointly pruned via a learnable element-wise threshold. In addition, we innovatively design a regularized pruning mechanism in CERP, such that the two sparsified meta-embedding tables are encouraged to encode information that is mutually complementary. Given the compatibility with agnostic latent factor models, we pair CERP with two popular recommendation models for extensive experiments, where results on two real-world datasets under different memory budgets demonstrate its superiority against state-of-the-art baselines. The codebase of CERP is available in https://github.com/xurong-liang/CERP.) <|cite_end|>integrates soft-thresholding pruning into the compositional embedding with two balanced-size embedding tables. Thus, CERP could achieve a higher compression rate than the original compositional embedding but suffers from the complexity introduced by the pruning step.
\subsubsection{Summary} The compositional methods are the most straightforward to tune, since the number of parameters is fixed at the start of training, and their training efficiency is generally better than that of other approaches. However, they suffer from limited performance because every feature receives the same memory allocation, and they also introduce more inference-time overhead.
On the other hand, pruning trades off training efficiency for better performance <|cite_start|> (Reference: Experimental Analysis of Large-scale Learnable Vector Storage Compression: Learnable embedding vector is one of the most important applications in machine learning, and is widely used in various database-related domains. However, the high dimensionality of sparse data in recommendation tasks and the huge volume of corpus in retrieval-related tasks lead to a large memory consumption of the embedding table, which poses a great challenge to the training and deployment of models. Recent research has proposed various methods to compress the embeddings at the cost of a slight decrease in model quality or the introduction of other overheads. Nevertheless, the relative performance of these methods remains unclear. Existing experimental comparisons only cover a subset of these methods and focus on limited metrics. In this paper, we perform a comprehensive comparative analysis and experimental evaluation of embedding compression. We introduce a new taxonomy that categorizes these techniques based on their characteristics and methodologies, and further develop a modular benchmarking framework that integrates 14 representative methods. Under a uniform test environment, our benchmark fairly evaluates each approach, presents their strengths and weaknesses under different memory budgets, and recommends the best method based on the use case. In addition to providing useful guidelines, our study also uncovers the limitations of current methods and suggests potential directions for future research.) <|cite_end|>. The training pipelines are more complicated and involve multiple steps. Moreover, pruning demands specific hardware to process sparse matrices efficiently.
NAS-based methods generally demand the most training resources, while offering the best inference efficiency and performance.
Last but not least, hybrid methods combine the advantages of the other categories to deliver better recommendations.
\subsection{Recommender Model Benchmark}
Rendle et al. <|cite_start|> (Reference: On the Difficulty of Evaluating Baselines: A Study on Recommender Systems: Numerical evaluations with comparisons to baselines play a central role when judging research in recommender systems. In this paper, we show that running baselines properly is difficult. We demonstrate this issue on two extensively studied datasets. First, we show that results for baselines that have been used in numerous publications over the past five years for the Movielens 10M benchmark are suboptimal. With a careful setup of a vanilla matrix factorization baseline, we are not only able to improve upon the reported results for this baseline but even outperform the reported results of any newly proposed method. Secondly, we recap the tremendous effort that was required by the community to obtain high quality results for simple methods on the Netflix Prize. Our results indicate that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community.) <|cite_end|>show that well-tuned baselines could outperform newly proposed methods, which initiated a heated debate in the RSs studies. Aligning with the previous research, Maurizio et al. <|cite_start|> (Reference: Are we really making much progress? A worrying analysis of recent neural recommendation approaches: Deep learning techniques have become the method of choice for researchers working on algorithmic aspects of recommender systems. With the strongly increased interest in machine learning in general, it has, as a result, become difficult to keep track of what represents the state-of-the-art at the moment, e.g., for top-n recommendation tasks. At the same time, several recent publications point out problems in today's research practice in applied machine learning, e.g., in terms of the reproducibility of the results or the choice of the baselines when proposing new models. In this work, we report the results of a systematic analysis of algorithmic proposals for top-n recommendation tasks. Specifically, we considered 18 algorithms that were presented at top-level research conferences in the last years. Only 7 of them could be reproduced with reasonable effort. For these methods, it however turned out that 6 of them can often be outperformed with comparably simple heuristic methods, e.g., based on nearest-neighbor or graph-based techniques. The remaining one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method. Overall, our work sheds light on a number of potential problems in today's machine learning scholarship and calls for improved scientific practices in this area.) <|cite_end|>also indicate that simple baselines could defeat the more sophisticated deep learning models. Responding to <|cite_start|> (Reference: Are we really making much progress? A worrying analysis of recent neural recommendation approaches: Deep learning techniques have become the method of choice for researchers working on algorithmic aspects of recommender systems. With the strongly increased interest in machine learning in general, it has, as a result, become difficult to keep track of what represents the state-of-the-art at the moment, e.g., for top-n recommendation tasks. 
At the same time, several recent publications point out problems in today's research practice in applied machine learning, e.g., in terms of the reproducibility of the results or the choice of the baselines when proposing new models. In this work, we report the results of a systematic analysis of algorithmic proposals for top-n recommendation tasks. Specifically, we considered 18 algorithms that were presented at top-level research conferences in the last years. Only 7 of them could be reproduced with reasonable effort. For these methods, it however turned out that 6 of them can often be outperformed with comparably simple heuristic methods, e.g., based on nearest-neighbor or graph-based techniques. The remaining one clearly outperformed the baselines but did not consistently outperform a well-tuned non-neural linear ranking method. Overall, our work sheds light on a number of potential problems in today's machine learning scholarship and calls for improved scientific practices in this area.) <|cite_end|>, DaisyRec <|cite_start|> (Reference: Are We Evaluating Rigorously? Benchmarking Recommendation for Reproducible
Evaluation and Fair Comparison: With tremendous amount of recommendation algorithms proposed every year, one critical issue has attracted a considerable amount of attention: there are no effective benchmarks for evaluation, which leads to two major concerns, i.e., unreproducible evaluation and unfair comparison. This paper aims to conduct rigorous (i.e., reproducible and fair) evaluation for implicit-feedback based top-N recommendation algorithms. We first systematically review 85 recommendation papers published at eight top-tier conferences (e.g., RecSys, SIGIR) to summarize important evaluation factors, e.g., data splitting and parameter tuning strategies, etc. Through a holistic empirical study, the impacts of different factors on recommendation performance are then analyzed in-depth. Following that, we create benchmarks with standardized procedures and provide the performance of seven well-tuned state-of-the-arts across six metrics on six widely-used datasets as a reference for later study. Additionally, we release a user-friendly Python toolkit, which differs from existing ones in addressing the broad scope of rigorous evaluation for recommendation. Overall, our work sheds light on the issues in recommendation evaluation and lays the foundation for further investigation. Our code and datasets are available at GitHub (https://github.com/AmazingDD/daisyRec).) <|cite_end|> <|cite_start|> (Reference: DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation: Recently, one critical issue looms large in the field of recommender systems – there are no effective benchmarks for rigorous evaluation – which consequently leads to unreproducible evaluation and unfair comparison. We, therefore, conduct studies from the perspectives of practical theory and experiments, aiming at benchmarking recommendation for rigorous evaluation. Regarding the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review on 141 papers published at eight top-tier conferences within 2017-2020. We then classify them into model-independent and model-dependent hyper-factors, and different modes of rigorous evaluation are defined and discussed in-depth accordingly. For the experimental study, we release DaisyRec 2.0 library by integrating these hyper-factors to perform rigorous evaluation, whereby a holistic empirical study is conducted to unveil the impacts of different hyper-factors on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and providing performance of ten state-of-the-arts across six evaluation metrics on six datasets as a reference for later study. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays foundation for further investigation.) <|cite_end|>performs an extensive study on how various hyperparameters affect recommendation models' performance. Shehzad et al. <|cite_start|> (Reference: Everyone’s a Winner! On Hyperparameter Tuning of Recommendation Models: The performance of a recommender system algorithm in terms of common offline accuracy measures often strongly depends on the chosen hyperparameters. 
Therefore, when comparing algorithms in offline experiments, we can obtain reliable insights regarding the effectiveness of a newly proposed algorithm only if we compare it to a number of state-of-the-art baselines that are carefully tuned for each of the considered datasets. While this fundamental principle of any area of applied machine learning is undisputed, we find that the tuning process for the baselines in the current literature is barely documented in much of today’s published research. Ultimately, in case the baselines are actually not carefully tuned, progress may remain unclear. In this paper, we exemplify through a computational experiment involving seven recent deep learning models how every method in such an unsound comparison can be reported to be outperforming the state-of-the-art. Finally, we iterate appropriate research practices to avoid unreliable algorithm comparisons in the future.) <|cite_end|>conduct experiments to show that the worst well-fine-tuned model will outperform the best non-fine-tuned model.
With the rising concerns of reproducibility, BarsCTR <|cite_start|> (Reference: {Open benchmarking for click-through rate prediction: Click-through rate (CTR) prediction is a critical task for many applications, as its accuracy has a direct impact on user experience and platform revenue. In recent years, CTR prediction has been widely studied in both academia and industry, resulting in a wide variety of CTR prediction models. Unfortunately, there is still a lack of standardized benchmarks and uniform evaluation protocols for CTR prediction research. This leads to non-reproducible or even inconsistent experimental results among existing studies, which largely limit the practical value and potential impact of their research. In this work, we aim to perform open benchmarking for CTR prediction and present a rigorous comparison of different models in a reproducible manner. To this end, we ran over 7,000 experiments for more than 12,000 GPU hours in total to re-evaluate 24 existing models on multiple dataset settings. Surprisingly, our experiments show that with sufficient hyper-parameter search and model tuning, many deep models have smaller differences than expected. The results also reveal that making real progress on the modeling of CTR prediction is indeed a very challenging research task. We believe that our benchmarking work could not only allow researchers to gauge the effectiveness of new models conveniently but also make them fairly compare with the state of the arts. We have publicly released the benchmarking tools, evaluation protocols, and experimental settings of our work to promote reproducible research in this field.) <|cite_end|>focuses on the reproducibility of CTR models by unifying the data pre-processing logic and providing an open-source implementation for various methods. After that, Zhu et al. <|cite_start|> (Reference: {BARS:: 目的
本研究以行为活动评定量表(Behavioural Activity rating Scale,BARS)为依据,探讨精神科实施行为管理方式的临床标准。
方法
将219例新入院精神病患者随机分为两组,将其新入院1个月内总共453次行为障碍进行BARS量表评定,以BARS量表评分为依据,建议医生开具行为管理医嘱,选择行为管理方式。统计分析不同行为管理方式的使用情况,以及之后不良事件的发生情况。
结果
入组患者入院第1个月内在行为管理方式的选择上,实验组保护性约束、保护性约束+肌注镇静剂以及总的保护性约束率的使用率显著低于对照组(P< 0.05或P< 0.01);心理疏导、口服镇静剂、肌注镇静剂、总的肌注镇静剂的使用率显著高于对照组(P<0.05或P<0.01)。实验组患者每次实施行为管理后不良事件的发生率显著低于对照组(P< 0.05或P <0.01)。
结论
以BARS量表评分为依据指导临床行为管理,不仅可以改善保护性约束滥用的情况,而且可以有效降低实施行为管理后不良事件的发生。) <|cite_end|>extended previous works by working on both collaborative filtering and content-based recommendation. However, these studies exclusively examine different backbones used in recommendation models. Li et al. <|cite_start|> (Reference: Embedding Compression in Recommender Systems: A Survey: To alleviate the problem of information explosion, recommender systems are widely deployed to provide personalized information filtering services. Usually, embedding tables are employed in recommender systems to transform high-dimensional sparse one-hot vectors into dense real-valued embeddings. However, the embedding tables are huge and account for most of the parameters in industrial-scale recommender systems. In order to reduce memory costs and improve efficiency, various approaches are proposed to compress the embedding tables. In this survey, we provide a comprehensive review of embedding compression approaches in recommender systems. We first introduce deep learning recommendation models and the basic concept of embedding compression in recommender systems. Subsequently, we systematically organize existing approaches into three categories, namely low-precision, mixed-dimension, and weight-sharing, respectively. Lastly, we summarize the survey with some general suggestions and provide future prospects for this field.) <|cite_end|>provide an overview of various LERSs methods.
Yin et al. <|cite_start|> (Reference: On-Device Recommender Systems: A Comprehensive Survey: Recommender systems have been widely deployed in various real-world applications to help users identify content of interest from massive amounts of information. Traditional recommender systems work by collecting user-item interaction data in a cloud-based data center and training a centralized model to perform the recommendation service. However, such cloud-based recommender systems (CloudRSs) inevitably suffer from excessive resource consumption, response latency, as well as privacy and security risks concerning both data and models. Recently, driven by the advances in storage, communication, and computation capabilities of edge devices, there has been a shift of focus from CloudRSs to on-device recommender systems (DeviceRSs), which leverage the capabilities of edge devices to minimize centralized data storage requirements, reduce the response latency caused by communication overheads, and enhance user privacy and security by localizing data processing and model training. Despite the rapid rise of DeviceRSs, there is a clear absence of timely literature reviews that systematically introduce, categorize and contrast these methods. To bridge this gap, we aim to provide a comprehensive survey of DeviceRSs, covering three main aspects: (1) the deployment and inference of DeviceRSs (2) the training and update of DeviceRSs (3) the security and privacy of DeviceRSs. Furthermore, we provide a fine-grained and systematic taxonomy of the methods involved in each aspect, followed by a discussion regarding challenges and future research directions. This is the first comprehensive survey on DeviceRSs that covers a spectrum of tasks to fit various needs. We believe this survey will help readers effectively grasp the current research status in this field, equip them with relevant technical foundations, and stimulate new research ideas for developing DeviceRSs.) <|cite_end|>extensively explore the on-device settings for RSs, including inference, training, and security concerns.
Zhang et al. <|cite_start|> (Reference: Experimental Analysis of Large-scale Learnable Vector Storage Compression: Learnable embedding vector is one of the most important applications in machine learning, and is widely used in various database-related domains. However, the high dimensionality of sparse data in recommendation tasks and the huge volume of corpus in retrieval-related tasks lead to a large memory consumption of the embedding table, which poses a great challenge to the training and deployment of models. Recent research has proposed various methods to compress the embeddings at the cost of a slight decrease in model quality or the introduction of other overheads. Nevertheless, the relative performance of these methods remains unclear. Existing experimental comparisons only cover a subset of these methods and focus on limited metrics. In this paper, we perform a comprehensive comparative analysis and experimental evaluation of embedding compression. We introduce a new taxonomy that categorizes these techniques based on their characteristics and methodologies, and further develop a modular benchmarking framework that integrates 14 representative methods. Under a uniform test environment, our benchmark fairly evaluates each approach, presents their strengths and weaknesses under different memory budgets, and recommends the best method based on the use case. In addition to providing useful guidelines, our study also uncovers the limitations of current methods and suggests potential directions for future research.) <|cite_end|>share similarities with our work, studying various embedding compression methods, albeit with the limited scope of recommendation task (CTR prediction only) and minimal hyperparameter fine-tuning. On the other hand, our work provides a more extensive hyperparameter tuning by adapting the methodology from <|cite_start|> (Reference: DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation: Recently, one critical issue looms large in the field of recommender systems – there are no effective benchmarks for rigorous evaluation – which consequently leads to unreproducible evaluation and unfair comparison. We, therefore, conduct studies from the perspectives of practical theory and experiments, aiming at benchmarking recommendation for rigorous evaluation. Regarding the theoretical study, a series of hyper-factors affecting recommendation performance throughout the whole evaluation chain are systematically summarized and analyzed via an exhaustive review on 141 papers published at eight top-tier conferences within 2017-2020. We then classify them into model-independent and model-dependent hyper-factors, and different modes of rigorous evaluation are defined and discussed in-depth accordingly. For the experimental study, we release DaisyRec 2.0 library by integrating these hyper-factors to perform rigorous evaluation, whereby a holistic empirical study is conducted to unveil the impacts of different hyper-factors on recommendation performance. Supported by the theoretical and experimental studies, we finally create benchmarks for rigorous evaluation by proposing standardized procedures and providing performance of ten state-of-the-arts across six evaluation metrics on six datasets as a reference for later study. Overall, our work sheds light on the issues in recommendation evaluation, provides potential solutions for rigorous evaluation, and lays foundation for further investigation.) 
<|cite_end|> and studies the transferability of LERSs to collaborative filtering tasks. <|paper_end|>
"<|reference_start|> On-Device Next-Item Recommendation with Self-Supervised Knowledge Distillation: Modern recommender systems operate in a fully server-based fashion. To cater to millions of users, the frequent model maintaining and the high-speed processing for concurrent user requests are required, which comes at the cost of a huge carbon footprint. Meanwhile, users need to upload their behavior data even including the immediate environmental context to the server, raising the public concern about privacy. On-device recommender systems circumvent these two issues with cost-conscious settings and local inference. However, due to the limited memory and computing resources, on-device recommender systems are confronted with two fundamental challenges: (1) how to reduce the size of regular models to fit edge devices? (2) how to retain the original capacity? Previous research mostly adopts tensor decomposition techniques to compress the regular recommendation model with limited compression ratio so as to avoid drastic performance degradation. In this paper, we explore ultra-compact models for next-item recommendation, by loosing the constraint of dimensionality consistency in tensor decomposition. Meanwhile, to compensate for the capacity loss caused by compression, we develop a self-supervised knowledge distillation framework which enables the compressed model (student) to distill the essential information lying in the raw data, and improves the long-tail item recommendation through an embedding-recombination strategy with the original model (teacher). The extensive experiments on two benchmarks demonstrate that, with 30x model size reduction, the compressed model almost comes with no accuracy loss, and even outperforms its uncompressed counterpart in most cases. <|reference_end|>",
"<|reference_start|> On the Difficulty of Evaluating Baselines: A Study on Recommender Systems: Numerical evaluations with comparisons to baselines play a central role when judging research in recommender systems. In this paper, we show that running baselines properly is difficult. We demonstrate this issue on two extensively studied datasets. First, we show that results for baselines that have been used in numerous publications over the past five years for the Movielens 10M benchmark are suboptimal. With a careful setup of a vanilla matrix factorization baseline, we are not only able to improve upon the reported results for this baseline but even outperform the reported results of any newly proposed method. Secondly, we recap the tremendous effort that was required by the community to obtain high quality results for simple methods on the Netflix Prize. Our results indicate that empirical findings in research papers are questionable unless they were obtained on standardized benchmarks where baselines have been tuned extensively by the research community. <|reference_end|>",
"<|reference_start|> On-Device Recommender Systems: A Comprehensive Survey: Recommender systems have been widely deployed in various real-world applications to help users identify content of interest from massive amounts of information. Traditional recommender systems work by collecting user-item interaction data in a cloud-based data center and training a centralized model to perform the recommendation service. However, such cloud-based recommender systems (CloudRSs) inevitably suffer from excessive resource consumption, response latency, as well as privacy and security risks concerning both data and models. Recently, driven by the advances in storage, communication, and computation capabilities of edge devices, there has been a shift of focus from CloudRSs to on-device recommender systems (DeviceRSs), which leverage the capabilities of edge devices to minimize centralized data storage requirements, reduce the response latency caused by communication overheads, and enhance user privacy and security by localizing data processing and model training. Despite the rapid rise of DeviceRSs, there is a clear absence of timely literature reviews that systematically introduce, categorize and contrast these methods. To bridge this gap, we aim to provide a comprehensive survey of DeviceRSs, covering three main aspects: (1) the deployment and inference of DeviceRSs (2) the training and update of DeviceRSs (3) the security and privacy of DeviceRSs. Furthermore, we provide a fine-grained and systematic taxonomy of the methods involved in each aspect, followed by a discussion regarding challenges and future research directions. This is the first comprehensive survey on DeviceRSs that covers a spectrum of tasks to fit various needs. We believe this survey will help readers effectively grasp the current research status in this field, equip them with relevant technical foundations, and stimulate new research ideas for developing DeviceRSs. <|reference_end|>",
"<|reference_start|> Experimental Analysis of Large-scale Learnable Vector Storage Compression: Learnable embedding vector is one of the most important applications in machine learning, and is widely used in various database-related domains. However, the high dimensionality of sparse data in recommendation tasks and the huge volume of corpus in retrieval-related tasks lead to a large memory consumption of the embedding table, which poses a great challenge to the training and deployment of models. Recent research has proposed various methods to compress the embeddings at the cost of a slight decrease in model quality or the introduction of other overheads. Nevertheless, the relative performance of these methods remains unclear. Existing experimental comparisons only cover a subset of these methods and focus on limited metrics. In this paper, we perform a comprehensive comparative analysis and experimental evaluation of embedding compression. We introduce a new taxonomy that categorizes these techniques based on their characteristics and methodologies, and further develop a modular benchmarking framework that integrates 14 representative methods. Under a uniform test environment, our benchmark fairly evaluates each approach, presents their strengths and weaknesses under different memory budgets, and recommends the best method based on the use case. In addition to providing useful guidelines, our study also uncovers the limitations of current methods and suggests potential directions for future research. <|reference_end|>"
] | [
7,
26,
35,
36
] | {"<|cite_1|>": "arxiv-406719", "<|cite_2|>": "ss-1254435", "<|cite_3|>": "arxiv-562019", "<|cite_4|>": "arxiv-246668", "<|cite_5|>": "ss-1254435", "<|cite_7|>": "arxiv-333660", "<|multi_cite_8_1|>": "arxiv-495527", "<|multi_cite_8_2|>": "ss-961206", "<|multi_cite_9_1|>": "arxiv-406719", "<|multi_cite_9_2|>": "arxiv-476726", "<|multi_cite_10_1|>": "arxiv-439093", "<|multi_cite_10_2|>": "arxiv-317599", "<|multi_cite_11_1|>": "arxiv-189227", "<|multi_cite_11_2|>": "arxiv-209387", "<|multi_cite_12_1|>": "arxiv-345627", "<|multi_cite_12_2|>": "ss-1368112", "<|multi_cite_12_3|>": "arxiv-577401", "<|cite_13|>": "ss-1368113", "<|cite_14|>": "ss-686033", "<|cite_15|>": "arxiv-537308", "<|multi_cite_16_1|>": "arxiv-221966", "<|multi_cite_16_2|>": "arxiv-317599", "<|cite_17|>": "arxiv-318223", "<|cite_18|>": "arxiv-315890", "<|multi_cite_19_1|>": "arxiv-345627", "<|multi_cite_19_2|>": "arxiv-213767", "<|multi_cite_20_1|>": "arxiv-439093", "<|multi_cite_20_2|>": "arxiv-537308", "<|cite_21|>": "arxiv-297872", "<|cite_22|>": "arxiv-132115", "<|cite_23|>": "ss-1104666", "<|cite_24|>": "arxiv-91586", "<|cite_25|>": "arxiv-537308", "<|cite_26|>": "arxiv-85986", "<|cite_27|>": "ss-1368114", "<|cite_28|>": "arxiv-246668", "<|cite_29|>": "arxiv-315890", "<|cite_30|>": "arxiv-439093", "<|cite_31|>": "arxiv-315890", "<|cite_32|>": "arxiv-439093", "<|cite_33|>": "arxiv-315890", "<|cite_34|>": "arxiv-439093", "<|multi_cite_35_1|>": "ss-1178717", "<|multi_cite_35_2|>": "arxiv-562019", "<|multi_cite_36_1|>": "arxiv-315890", "<|multi_cite_36_2|>": "arxiv-439093", "<|multi_cite_36_3|>": "arxiv-221966", "<|multi_cite_37_1|>": "arxiv-297872", "<|multi_cite_37_2|>": "arxiv-495322", "<|multi_cite_38_1|>": "arxiv-150725", "<|multi_cite_38_2|>": "arxiv-421208", "<|cite_39|>": "arxiv-317599", "<|cite_40|>": "arxiv-315890", "<|cite_41|>": "arxiv-221966", "<|cite_42|>": "arxiv-406719", "<|cite_43|>": "arxiv-317599", "<|cite_44|>": "arxiv-415053", "<|cite_45|>": "arxiv-297872", "<|multi_cite_46_1|>": "ss-816205", "<|multi_cite_46_2|>": "arxiv-79036", "<|cite_47|>": "arxiv-248687", "<|cite_48|>": "arxiv-315890", "<|cite_49|>": "arxiv-151068", "<|cite_50|>": "arxiv-411513", "<|cite_51|>": "arxiv-582106", "<|cite_52|>": "arxiv-213767", "<|cite_54|>": "arxiv-163588", "<|cite_55|>": "ss-1235970", "<|cite_56|>": "arxiv-109304", "<|cite_58|>": "arxiv-345627", "<|cite_59|>": "arxiv-495322", "<|cite_60|>": "arxiv-551705", "<|cite_61|>": "arxiv-439093", "<|cite_62|>": "arxiv-537308", "<|cite_63|>": "arxiv-562019", "<|cite_64|>": "arxiv-202678", "<|cite_65|>": "ss-1531008", "<|cite_66|>": "ss-1531008", "<|multi_cite_67_1|>": "ss-1264093", "<|multi_cite_67_2|>": "ss-677560", "<|cite_68|>": "ss-1368115", "<|cite_69|>": "ss-1178717", "<|cite_70|>": "ss-1368116", "<|cite_71|>": "arxiv-645603", "<|cite_72|>": "arxiv-577401", "<|cite_73|>": "arxiv-562019", "<|cite_74|>": "ss-677560"} |
2003.06739 | <|paper_start|> Title: Asymptotic Network Independence and Step-Size for A Distributed Subgradient Method
Abstract: Asymptotic Network Independence and Step-Size for A Distributed Subgradient Method: We consider whether distributed subgradient methods can achieve a linear speedup over a centralized subgradient method. While it might be hoped that distributed network of $n$ nodes that can compute $n$ times more subgradients in parallel compared to a single node might, as a result, be $n$ times faster, existing bounds for distributed optimization methods are often consistent with a slowdown rather than speedup compared to a single node. We show that a distributed subgradient method has this "linear speedup" property when using a class of square-summable-but-not-summable step-sizes which include $1/t^{\beta}$ when $\beta \in (1/2,1)$; for such step-sizes, we show that after a transient period whose size depends on the spectral gap of the network, the method achieves a performance guarantee that does not depend on the network or the number of nodes. We also show that the same method can fail to have this "asymptotic network independence" property under the optimally decaying step-size $1/\sqrt{t}$ and, as a consequence, can fail to provide a linear speedup compared to a single node with $1/\sqrt{t}$ step-size.
Introduction
We consider the standard setting of distributed convex optimization: $f_1(x), \ldots, f_n(x)$ are convex functions from $\R^d$ to $\R$, with node $i$ of the network being the only node which can compute subgradients of the function $f_i(x)$. The goal is to compute a minimizer \begin{equation} \label{eq:mainprob} x^* \in \arg \min_{x \in \Omega} F(x),\end{equation} where $$ F(x) := \frac{1}{n} \sum_{i=1}^n f_i(x),$$ and $\Omega$ is a closed convex set. The underlying method must be decentralized, relying only on local subgradient computations and peer-to-peer message exchanges in a certain graph $G$. In particular, we will consider the ``standard model'' of distributed optimization where at each step, node $i$ computes a subgradient of its local function, possibly performs a projection step onto the set $\Omega$, and broadcasts a message to its neighbors.
This problem setup is a natural model for machine learning over a network of processors. The problem of minimizing $F(x)$ typically arises from empirical loss minimization: the function $F(x)$ measures how well a model parametrized by the vector $x$ fits a collection of data points, and distributing the data points among $n$ processors results in the problem formulation of Eq. (\ref{eq:mainprob}).
A variation on this setup considers the situation when the underlying graph $G$ is taken to be the star graph, sometimes called ``local gradient descent'' (see <|cite_start|> (Reference: Local SGD Converges Fast and Communicates Little: Mini-batch stochastic gradient descent (SGD) is state of the art in large scale distributed training. The scheme can reach a linear speedup with respect to the number of workers, but this is rarely seen in practice as the scheme often suffers from large network delays and bandwidth limits. To overcome this communication bottleneck recent works propose to reduce the communication frequency. An algorithm of this type is local SGD that runs SGD independently in parallel on different workers and averages the sequences only once in a while. This scheme shows promising results in practice, but eluded thorough theoretical analysis. We prove concise convergence rates for local SGD on convex problems and show that it converges at the same rate as mini-batch SGD in terms of number of evaluated gradients, that is, the scheme achieves linear speedup in the number of workers and mini-batch size. The number of communication rounds can be reduced up to a factor of T^{1/2}---where T denotes the number of total steps---compared to mini-batch SGD. This also holds for asynchronous implementations. Local SGD can also be used for large scale training of deep learning models. The results shown here aim serving as a guideline to further explore the theoretical and practical aspects of local SGD in these applications.) <|cite_end|>). The advantage of using the star graph is that one can design simple protocols involving rounds of interaction between the center and the leaf nodes which are not available in the setting where $G$ is an arbitrary graph. However, a disadvantage of using the star graph is that, as the number of nodes gets large, the number of bits that need to be transmitted to the center increases as well (see \cite{} which consider gradient compression to overcome this). One way to avoid this problem is to consider optimization over arbitrary graphs $G$ instead, as we do in this paper.
This problem formulation is now classical; it was first analyzed in <|cite_start|> (Reference: {Distributed subgradient methods for multi-agent optimization: We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.) <|cite_end|>, where a distributed subgradient method was proposed for the unconstrained case when $\Omega=\R^d$. The case with the constraint $\Omega$ was first analyzed in <|cite_start|> (Reference: Constrained Consensus and Optimization in Multi-Agent Networks: We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.) <|cite_end|>. Both papers proposed methods inspired by the ``average consensus'' literature, where nodes mix subgradient steps on their local functions with linear combinations of their neighbors' iterates.
Distributed optimization methods have attracted considerable attention since the publication of <|cite_start|> (Reference: {Distributed subgradient methods for multi-agent optimization: We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.) <|cite_end|> for several reasons. First, it is hoped that distributed empirical loss minimization in machine learning could result in faster training. Second, many problems in control and signal processing among network of nodes involve nodes acting to maximize a global objective from local information, and Eq. (\ref{eq:mainprob}) is thought to be among the simplest problems of this type. Over the past decade, thousands of papers have been written on different variations of this problem, and it would be impossible to survey all this related work; instead, we refer the reader to the recent survey <|cite_start|> (Reference: Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.) <|cite_end|>.
We next launch into a discussion of the main motivating concern of this paper, namely how the performance of distributed optimization methods compares to their centralized counterparts. We begin by discussing the available guarantees for the (centralized) subgradient method, so that we can contrast those guarantees with the available distributed bounds in our survey of previous work, which follows.
\subsection{The subgradient method}
The projected subgradient method run on the function $F(x)$ takes the form
\[ y(t+1) = P_{\Omega} \left[ y(t) - \alpha(t) g_F(t) \right], \] where $g_F(t)$ is a subgradient of the function $F(\cdot)$ at $y(t)$, and $P_{\Omega}$ is the projection onto $\Omega$.
The standard reference for an analysis of this method is the set of lecture notes <|cite_start|> (Reference: Subgradient Methods: 3 Convergence proof 4 3.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 3.2 Some basic inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.3 A bound on the suboptimality bound . . . . . . . . . . . . . . . . . . . . . . 7 3.4 A stopping criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3.5 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8) <|cite_end|>. It is usually assumed that $||g_F(t)||_2 \leq L$ for all $t$, i.e., all subgradients are bounded; and $\Omega$ is assumed to have diameter at most $D$. The function $F(x)$ may have more than one minimizer over $\Omega$; we select one minimizer arbitrarily and call it $x^*$.
The step-size $\alpha(t)$ needs to be properly chosen. There are two choices that are typically analyzed in this setting. One is to set $\alpha(t)=1/\sqrt{t}$, which turns out to be the optimal decay rate. The other is to choose $\alpha(t)$ to be ``square summable but not summable'' as in the following assumption.
\begin{assumption} \label{ass:sum} The sequence $\alpha(t)$ satisfies
\begin{eqnarray*} \sum_{t=1}^{+\infty} \alpha^2(t) & < & \infty \\ \sum_{t=1}^{+\infty} \alpha(t) & = & +\infty
\end{eqnarray*}
\end{assumption}
We now briefly summarize the standard analysis of the method from <|cite_start|> (Reference: Subgradient Methods: 3 Convergence proof 4 3.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 3.2 Some basic inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.3 A bound on the suboptimality bound . . . . . . . . . . . . . . . . . . . . . . 7 3.4 A stopping criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3.5 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8) <|cite_end|>, which the reader can consult for details. The analysis is based on the following recurrence relation, to the effect that, up to second order terms, the method gets closer to the set of minimizers at every step: \begin{small}
\[ ||y(k+1) - x^*||_2^2 \leq ||y(k) - x^*||_2^2 - 2 \alpha(k) (F(y(k)) - F^*) + L^2 \alpha^2(k), \] \end{small}
It is standard to re-arrange this into a telescoping sum as
\begin{align} 2 \alpha(k) (F(y(k)) - F^*) \leq & ||y(k) - x^*||_2^2 - ||y(k+1) - x^*||_2^2 + L^2 \alpha^2(k), \label{eq:recurr}
\end{align} and then sum it up over $k=1, \ldots, t$. Indeed, defining
\[ y_{\alpha}(t) :=\frac{ \sum_{k=1}^t \alpha(k) y(k)}{\sum_{k=1}^t \alpha(k)},
\] and summing up Eq. (\ref{eq:recurr}) and then appealing to the convexity of $F(x)$ we can obtain that
\begin{equation} \label{eq:firstsubbound} F(y_{\alpha}(t)) - F^*
\leq \frac{D^2 + L^2 \sum_{k=1}^t \alpha^2(k)}{2\sum_{k=1}^t \alpha(k) },
\end{equation} where $||y(0)-x^*||_2^2 \leq D^2$ as $\Omega$ was assumed to have diameter $D$. Finally, by Assumption \ref{ass:sum}, the right-hand side goes to zero, and so we obtain that the subgradient method works.
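For concreteness, the following is a minimal simulation sketch of the method and of the weighted running average $y_{\alpha}(t)$. This sketch is our own illustration rather than an implementation from the cited references; the objective, the projection, and all numerical values in the example are placeholder assumptions.
\begin{verbatim}
import numpy as np

def projected_subgradient(subgrad_F, project, y0, beta=0.75, T=10000):
    # Projected subgradient method with alpha(t) = 1/t**beta.
    # subgrad_F(y) returns a subgradient of F at y; project(y) implements P_Omega.
    # Returns the weighted running average y_alpha(T).
    y = project(np.array(y0, dtype=float))
    weighted_sum = np.zeros_like(y)
    weight_total = 0.0
    for t in range(1, T + 1):
        alpha = 1.0 / t**beta
        y = project(y - alpha * subgrad_F(y))
        weighted_sum += alpha * y
        weight_total += alpha
    return weighted_sum / weight_total

# Placeholder example: F(y) = (1/50) * ||A y - b||_1 over the Euclidean ball of radius 5.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, b = rng.standard_normal((50, 10)), rng.standard_normal(50)
    subgrad = lambda y: A.T @ np.sign(A @ y - b) / 50
    project = lambda y: y if np.linalg.norm(y) <= 5 else 5 * y / np.linalg.norm(y)
    y_avg = projected_subgradient(subgrad, project, np.zeros(10))
    print("objective at averaged point:", np.abs(A @ y_avg - b).mean())
\end{verbatim}
Here the step-size $\alpha(t)=1/t^{3/4}$ is square summable but not summable, matching Assumption \ref{ass:sum}.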
A variation on this argument can get rid of the dependence on $L$ in Eq. (\ref{eq:firstsubbound}). This requires the following assumption.
\begin{assumption} \label{ass:firstc}
There is a constant $C_{\alpha}$ such that for all positive integers $t$,
\[ \sum_{k=1}^{t} \alpha(k) \leq C_{\alpha} \sum_{k=\lceil t/2 \rceil}^t \alpha(k). \]
\end{assumption}
This assumption can be motivated by observing that it is satisfied by step-sizes that decay polynomially as $\alpha(t)=1/t^{\beta}$ when $\beta \in (0,1)$.
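As a quick informal check (ours, ignoring lower-order terms), for $\alpha(t)=1/t^{\beta}$ with $\beta \in (1/2,1)$ both Assumption \ref{ass:sum} and Assumption \ref{ass:firstc} hold, with an explicit choice of $C_{\alpha}$: indeed,
\[ \sum_{t=1}^{\infty} \frac{1}{t^{2\beta}} < \infty \quad \mbox{(since } 2\beta > 1 \mbox{)}, \qquad \sum_{k=1}^{t} \frac{1}{k^{\beta}} \approx \frac{t^{1-\beta}}{1-\beta} \to \infty, \]
and, comparing integral approximations of the two sums in Assumption \ref{ass:firstc},
\[ \frac{\sum_{k=1}^{t} k^{-\beta}}{\sum_{k=\lceil t/2 \rceil}^{t} k^{-\beta}} \approx \frac{t^{1-\beta}/(1-\beta)}{\left(t^{1-\beta}/(1-\beta)\right)\left(1 - 2^{-(1-\beta)}\right)} = \frac{1}{1 - 2^{-(1-\beta)}}, \]
so any constant slightly larger than $1/(1-2^{-(1-\beta)})$ can serve as $C_{\alpha}$.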
With this assumption in place, one can set $t' = \lceil t/2 \rceil$ and instead sum Eq. (\ref{eq:recurr}) from $t'$ to $t$. Defining the running average from time $t'$ to $t$ as
\[ y_{\alpha}'(t) :=\frac{ \sum_{k=t'}^t \alpha(k) y(k)}{\sum_{k=t'}^t \alpha(k)}
\] this immediately yields the following proposition.
\begin{proposition} \label{prop:centsub} Suppose Assumptions \ref{ass:sum} and \ref{ass:firstc} on the step-size are satisfied, $F(x)$ is a convex function whose subgradients are upper bounded by $L$ in the Euclidean norm, and $t$ is large enough so that we have the upper bound
\begin{equation} \label{eq:tlower1} \sum_{k = \lfloor t/2 \rfloor}^{+\infty} \alpha^2(k) \leq \frac{D^2}{L^2}.
\end{equation} Then
\[ F(y_{\alpha}'(t)) - F^* \leq
\frac{D^2 C_{\alpha}}{\sum_{k=1}^t \alpha(k)}.
\] \label{prop:subgr}
\end{proposition}
This result has no dependence on $L$, at the expense of multiplying the dependence on $D$ by the constant $C_{\alpha}$. For example, if $\alpha(t)=1/t^{3/4}$, it is an exercise to verify that one can take $C_{\alpha}=7$. We note that since the step-size $\alpha(t)$ is square summable, Eq. (\ref{eq:tlower1}) is guaranteed to hold for large enough $t$.
The bound of this proposition suggests taking $\alpha(t)$ decaying as slowly as possible (so that $\sum_{k=1}^t \alpha(k)$ grows as fast as possible) while still keeping $\alpha(t)$ square summable but not summable. There is no optimal choice, but in general one wants to pick $\alpha(t)=1/t^{\beta}$ where $\beta$ is close to $1/2$, but not $1/2$ since $\alpha(t)=1/\sqrt{t}$ is not square summable. The result is a decay rate of $F(y_{\alpha}'(t)) - F^* = O(1/t^{1-\beta})$.
One can redo the above argument with the optimally decaying step-size $\alpha(t)=1/\sqrt{t}$. In that case, because this is not a square summable step-size, the dependence on $L$ cannot be avoided. However, since $\sum_{k=t'}^t \alpha^2(k) = \sum_{k=t'}^t 1/k = O(1)$, we can simply repeat all the steps above to obtain the bound
\begin{equation} \label{eq:normalsub} F(y_{\alpha}'(t)) - F^*
\leq O \left( \frac{D^2 + L^2}{\sqrt{t} } \right),
\end{equation} One can also choose $\alpha(t)$ depending on the constants $D$ and $L$ to obtain better scaling with respect to those constants; however, in this paper, for simplicity we restrict our attention to unoptimized step-sizes of the form $\alpha(t)=1/t^{\beta}$.
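For completeness, a standard calculation (not specific to this paper) illustrates what such an optimized choice gives: with a fixed horizon $T$ and the constant step-size $\alpha = \frac{D}{L\sqrt{T}}$, the same telescoping argument yields
\[ F(y_{\alpha}(T)) - F^* \leq \frac{D^2 + L^2 \sum_{k=1}^{T} \alpha^2}{2 \sum_{k=1}^{T} \alpha} = \frac{D^2 + D^2}{2 T \frac{D}{L \sqrt{T}}} = \frac{DL}{\sqrt{T}}, \]
which replaces the $D^2 + L^2$ dependence in Eq. (\ref{eq:normalsub}) by $DL$.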
We next compare these results for the centralized subgradient method to the available convergence guarantees in the distributed case.
\subsection{Convergence times of distributed subgradient methods}
A number of distributed subgradient methods have been proposed in the literature, with the simplest being
\begin{equation} \label{eq:no} x(t+1) = W x(t) - \alpha(t) g(t), \end{equation} which was analyzed in <|cite_start|> (Reference: {Distributed subgradient methods for multi-agent optimization: We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.) <|cite_end|>. Here $x(t)$ is an $n \times d$ matrix, with the $i$'th row of $x(t)$ being controlled by agent $i$; we will use $x_i(t)$ to denote this $i$'th row. The matrix $g(t)$ is also $n \times d$ and its $i$'th row, which we will denote by $g_i(t)$, is a subgradient of the function $f_i(x)$ at $x=x_i(t)$. The matrix $W$ is doubly stochastic and needs to satisfy some connectivity and aperiodicity conditions; it suffices to assume that $W$ has positive diagonal and that the directed graph corresponding to the positive entries of $W$ is strongly connected.
It was shown in <|cite_start|> (Reference: {Distributed subgradient methods for multi-agent optimization: We study a distributed computation model for optimizing a sum of convex objective functions corresponding to multiple agents. For solving this (not necessarily smooth) optimization problem, we consider a subgradient method that is distributed among the agents. The method involves every agent minimizing his/her own objective function while exchanging information locally with other agents in the network over a time-varying topology. We provide convergence results and convergence rate estimates for the subgradient method. Our convergence rate results explicitly characterize the tradeoff between a desired accuracy of the generated approximate optimal solutions and the number of iterations needed to achieve the accuracy.) <|cite_end|> that, for small enough constant stepsize $\alpha(t)=\alpha$, this method results in a final error that scales linearly in $\alpha$. The projected version
\begin{equation} \label{eq:nop} x(t+1) = P_{\Omega} \left[ W x(t) - \alpha(t) g'(t) \right], \end{equation} was studied in <|cite_start|> (Reference: Constrained Consensus and Optimization in Multi-Agent Networks: We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.) <|cite_end|>; here the projection operator $P_{\Omega}$ acts on each row of the matrix and $g'(t)$ is composed of subgradients evaluated at $Wx(t)$. It was shown that, under an appropriately decaying step-size, this scheme results in convergence to an optimal solution.
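To make the iteration concrete, here is a minimal simulation sketch of Eq. (\ref{eq:nop}); this is again our own illustration rather than code from the cited works, and the ring network, absolute-value losses, and interval constraint are placeholder assumptions.
\begin{verbatim}
import numpy as np

def distributed_projected_subgradient(W, subgrads, project, x0, beta=0.75, T=10000):
    # Simulates x(t+1) = P_Omega[ W x(t) - alpha(t) g'(t) ] with alpha(t) = 1/t**beta.
    # W is an n-by-n doubly stochastic matrix; subgrads[i](z) is a subgradient of f_i at z;
    # project is the Euclidean projection onto Omega, applied to each row.
    x = np.array(x0, dtype=float)
    n = x.shape[0]
    for t in range(1, T + 1):
        alpha = 1.0 / t**beta
        z = W @ x                                              # consensus (mixing) step
        g = np.stack([subgrads[i](z[i]) for i in range(n)])    # local subgradients at W x(t)
        x = np.stack([project(z[i] - alpha * g[i]) for i in range(n)])
    return x

# Placeholder example: n nodes on a ring, f_i(x) = |x - b_i|, Omega = [-10, 10];
# the minimizer of the average is a median of the b_i.
if __name__ == "__main__":
    n = 20
    b = np.linspace(-3.0, 3.0, n)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
        W[i, i] = 1.0 / 3.0
    subgrads = [lambda x, bi=bi: np.sign(x - bi) for bi in b]
    project = lambda x: np.clip(x, -10.0, 10.0)
    x_final = distributed_projected_subgradient(W, subgrads, project, np.zeros((n, 1)))
    print("spread of local iterates:", float(x_final.max() - x_final.min()))
\end{verbatim}
With the ring topology, the mixing step is a local averaging with the two neighbors, so each iteration uses only nearest-neighbor communication, matching the standard model described in the introduction.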
Our interest is in the convergence rate of these methods; in particular, we want to see if the parallelization inherent in having $n$ nodes query subgradients at the same time helps convergence. A useful benchmark is to consider a single node, which knows all the functions $f_i(x), i = 1, \ldots, n$, and can compute a subgradient of one of these functions at every time step. We will call the rate obtained in this setup by performing full-batch subgradient descent (i.e., by computing a subgradient of $F(x)$ by querying the subgradients of $f_1(x), \ldots, f_n(x)$ over $n$ steps) the {\em single-node rate}. The single-node rate is obtained by multiplying all the rates obtained in the previous section by $n$, consistent with $n$ steps being needed to compute a single subgradient of $F(x)$. For example, the bound of Proposition \ref{prop:subgr} becomes
\[ F(y_{\alpha}'(t)) - F^* \leq
\frac{n D^2 C_{\alpha}}{\sum_{k=1}^t \alpha(k)}. \]
Ideally, one hopes for a factor $n$ speedup over the single-node rate, since the $n$-node network can compute $n$ subgradients in parallel at every step. This corresponds to a convergence guarantee that removes the factor of $n$ from the last equation.
Most of the existing convergence analyses do not attempt to write out all the scalings for the convergence times of distributed optimization methods; many papers write out the scaling with $t$ but do not focus on scaling with the number of nodes. Unfortunately, once those scalings are traced out in the course of the proof, they tend to scale with $(1-\sigma)^{-1}$, where $\sigma$ is the second-largest singular value associated with the matrix $W$. The quantity $(1-\sigma)^{-1}$ can scale as much as $O(n^2)$ in the worst case over all graphs (see <|cite_start|> (Reference: Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.) <|cite_end|>), so the underlying scaling could actually be worse than the single-node rate.
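As a quick numerical illustration of this worst-case behavior (our own sanity check, not taken from the cited survey), one can compute $\sigma$ for Metropolis weights on a path graph and observe that $(1-\sigma)^{-1}$ grows roughly quadratically with $n$:
\begin{verbatim}
import numpy as np

def metropolis_path(n):
    # Metropolis weight matrix for the path graph on n nodes (symmetric, doubly stochastic).
    W = np.zeros((n, n))
    deg = np.array([1] + [2] * (n - 2) + [1])
    for i in range(n - 1):
        w = 1.0 / (1.0 + max(deg[i], deg[i + 1]))
        W[i, i + 1] = W[i + 1, i] = w
    np.fill_diagonal(W, 1.0 - W.sum(axis=1))
    return W

for n in [10, 20, 40, 80]:
    sigma = np.linalg.svd(metropolis_path(n), compute_uv=False)[1]  # second-largest singular value
    print(n, round(1.0 / (1.0 - sigma), 1))
\end{verbatim}
Doubling $n$ roughly quadruples $(1-\sigma)^{-1}$ in this example, consistent with the $O(n^2)$ worst-case scaling mentioned above.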
A concrete example of this comes from the survey paper <|cite_start|> (Reference: Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.) <|cite_end|>, where a worse case rate is explicitly written out. The unconstrained case is studied, with step-size $\alpha=1/\sqrt{T}$ and the algorithm is run for $T$ steps. It is shown in <|cite_start|> (Reference: Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology.) <|cite_end|> that
\begin{equation} \label{eq:prevscaling} F(y_{\alpha}(t)) - F^* \leq O \left( \frac{D^2 + L^2 (1-\sigma)^{-1}}{\sqrt{T}} \right) \end{equation} Comparing this with Eq. (\ref{eq:normalsub}), we see that, in the worst case when $(1-\sigma)^{-1} \approx \Theta(n^2)$, this is a factor of $n$ slower than the single node rate -- in spite of the fact that the network can compute $n$ gradients in parallel. Similar issues affect all the upper bounds in this setting that have been derived in the previous literature, in particular the bounds derived in <|cite_start|> (Reference: Dual Averaging for Distributed Optimization: Convergence Analysis and Network Scaling: The goal of decentralized optimization over a network is to optimize a global objective formed by a sum of local (possibly nonsmooth) convex functions using only local computation and communication. It arises in various application domains, including distributed tracking and localization, multi-agent co-ordination, estimation in sensor networks, and large-scale optimization in machine learning. We develop and analyze distributed algorithms based on dual averaging of subgradients, and we provide sharp bounds on their convergence rates as a function of the network size and topology. Our method of analysis allows for a clear separation between the convergence of the optimization algorithm itself and the effects of communication constraints arising from the network structure. In particular, we show that the number of iterations required by our algorithm scales inversely in the spectral gap of the network. The sharpness of this prediction is confirmed both by theoretical lower bounds and simulations for various networks. Our approach includes both the cases of deterministic optimization and communication, as well as problems with stochastic optimization and/or communication.) <|cite_end|> for dual subgradient, in <|cite_start|> (Reference: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks: In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.) <|cite_end|> (for the standard setting of distributed optimization where a single message exchange in neighbors is possible per step; <|cite_start|> (Reference: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks: In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. 
We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.) <|cite_end|> also explored methods where potentially $(1-\sigma)^{-1/2}$ steps of gossip are possible per step, in which case the dependence on $1-\sigma$ for the number of subgradients computed can be removed), and those implicit in <|cite_start|> (Reference: Constrained Consensus and Optimization in Multi-Agent Networks: We present distributed algorithms that can be used by multiple agents to align their estimates with a particular value over a network with time-varying connectivity. Our framework is general in that this value can represent a consensus value among multiple agents or an optimal solution of an optimization problem, where the global objective function is a combination of local agent objective functions. Our main focus is on constrained problems where the estimates of each agent are restricted to lie in different convex sets. To highlight the effects of constraints, we first consider a constrained consensus problem and present a distributed "projected consensus algorithm" in which agents combine their local averaging operation with projection on their individual constraint sets. This algorithm can be viewed as a version of an alternating projection method with weights that are varying over time and across agents. We establish convergence and convergence rate results for the projected consensus algorithm. We next study a constrained optimization problem for optimizing the sum of local objective functions of the agents subject to the intersection of their local constraint sets. We present a distributed "projected subgradient algorithm" which involves each agent performing a local averaging operation, taking a subgradient step to minimize its own objective function, and projecting on its constraint set. We show that, with an appropriately selected stepsize rule, the agent estimates generated by this algorithm converge to the same optimal solution for the cases when the weights are constant and equal, and when the weights are time-varying but all agents have the same constraint set.) <|cite_end|> for square-summable-but-not-summable step-sizes.
In the paper <|cite_start|> (Reference: {Linear Time Average Consensus and Distributed Optimization on Fixed Graphs: We describe a protocol for the average consensus problem on any fixed undirected graph whose convergence time scales linearly in the total number nodes $n$. The protocol relies only on nearest-neighbor interactions but requires all the nodes to know the same upper bound $U$ on the total number of nodes which is correct within a constant multiplicative factor. As an application, we develop a distributed protocol for minimizing an average of (possibly nondifferentiable) convex functions $(1/n) \sum_{i=1}^n f_i(\theta)$ in the setting where only node $i$ in an undirected, connected graph knows the function $f_i(\theta)$. Under the same assumption about all nodes knowing $U$, and additionally assuming that the subgradients of each $f_i(\theta)$ have absolute values bounded by some constant $L$ known to the nodes, we show that after $T$ iterations our protocol has error which is $O(L \sqrt{n/T})$. As a consequence, the time to solve this distributed optimization problem to any fixed accuracy is also linear in ...) <|cite_end|>, it was shown how, for a particular choice of the matrix $W$ that can be computed in a distributed way, it is possible to replace the $(1-\sigma)^{-1}$ with an $O(n)$ factor, matching the single-node rate. The idea was to use Nesterov acceleration, which allows us to replace $(1-\sigma)^{-1}$ with $(1-\sigma)^{-1/2}$, and to argue that, for a particular choice of weights, the latter quantity is $O(n)$. However, this required slightly stronger assumptions (namely, knowing either the total number of nodes or a reasonably accurate upper bound on it). A similar idea was explored in <|cite_start|> (Reference: Optimal Algorithms for Non-Smooth Distributed Optimization in Networks: In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension.) <|cite_end|>. While this does not offer a speedup over the single-node rate, it at least matches it.
To attain a linear speedup over the single node rate, we would need to remove $(1-\sigma)^{-1}$ from the numerator in the scaling above. In this paper, we explore when this can be done and when it cannot. <|paper_end|> | [
"<|reference_start|> Subgradient Methods: 3 Convergence proof 4 3.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 3.2 Some basic inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.3 A bound on the suboptimality bound . . . . . . . . . . . . . . . . . . . . . . 7 3.4 A stopping criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3.5 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 <|reference_end|>",
"<|reference_start|> Subgradient Methods: 3 Convergence proof 4 3.1 Assumptions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4 3.2 Some basic inequalities . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5 3.3 A bound on the suboptimality bound . . . . . . . . . . . . . . . . . . . . . . 7 3.4 A stopping criterion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 3.5 Numerical example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 <|reference_end|>",
"<|reference_start|> Network Topology and Communication-Computation Tradeoffs in Decentralized Optimization: In decentralized optimization, nodes cooperate to minimize an overall objective function that is the sum (or average) of per-node private objective functions. Algorithms interleave local computations with communication among all or a subset of the nodes. Motivated by a variety of applications---distributed estimation in sensor networks, fitting models to massive data sets, and distributed control of multi-robot systems, to name a few---significant advances have been made towards the development of robust, practical algorithms with theoretical performance guarantees. This paper presents an overview of recent work in this area. In general, rates of convergence depend not only on the number of nodes involved and the desired level of accuracy, but also on the structure and nature of the network over which nodes communicate (e.g., whether links are directed or undirected, static or time-varying). We survey the state-of-the-art algorithms and their analyses tailored to these different scenarios, highlighting the role of the network topology. <|reference_end|>",
"<|reference_start|> Optimal Algorithms for Non-Smooth Distributed Optimization in Networks: In this work, we consider the distributed optimization of non-smooth convex functions using a network of computing units. We investigate this problem under two regularity assumptions: (1) the Lipschitz continuity of the global objective function, and (2) the Lipschitz continuity of local individual functions. Under the local regularity assumption, we provide the first optimal first-order decentralized algorithm called multi-step primal-dual (MSPD) and its corresponding optimal convergence rate. A notable aspect of this result is that, for non-smooth functions, while the dominant term of the error is in $O(1/\\sqrt{t})$, the structure of the communication network only impacts a second-order term in $O(1/t)$, where $t$ is time. In other words, the error due to limits in communication resources decreases at a fast rate even in the case of non-strongly-convex objective functions. Under the global regularity assumption, we provide a simple yet efficient algorithm called distributed randomized smoothing (DRS) based on a local smoothing of the objective function, and show that DRS is within a $d^{1/4}$ multiplicative factor of the optimal convergence rate, where $d$ is the underlying dimension. <|reference_end|>"
] | [
5,
6,
11,
14
] | {"<|cite_1|>": "arxiv-159894", "<|cite_2|>": "ss-1262497", "<|cite_3|>": "ss-1538025", "<|cite_4|>": "ss-1262497", "<|cite_5|>": "arxiv-135671", "<|cite_6|>": "ss-1029383", "<|cite_7|>": "ss-1029383", "<|cite_8|>": "ss-1262497", "<|cite_9|>": "ss-1262497", "<|cite_10|>": "ss-1538025", "<|cite_11|>": "arxiv-135671", "<|cite_12|>": "arxiv-135671", "<|cite_13|>": "arxiv-135671", "<|cite_14|>": "arxiv-13509", "<|cite_15|>": "ss-1119924", "<|cite_16|>": "ss-1119924", "<|cite_17|>": "ss-1538025", "<|cite_18|>": "ss-1537299", "<|cite_19|>": "ss-1119924"} |
2212.14670 | <|paper_start|> Title: Hierarchical Deep Reinforcement Learning for VWAP Strategy Optimization
Abstract: Hierarchical Deep Reinforcement Learning for VWAP Strategy Optimization: Designing an intelligent volume-weighted average price (VWAP) strategy is a critical concern for brokers, since traditional rule-based strategies are relatively static and cannot achieve low transaction costs in a dynamic market. Many studies have tried to minimize the cost via reinforcement learning, but improvements remain limited, especially for long-duration strategies such as the VWAP strategy. To address this issue, we propose a joint deep learning and hierarchical reinforcement learning architecture termed Macro-Meta-Micro Trader (M3T) to capture market patterns and execute orders at different temporal scales. The Macro Trader first allocates a parent order into tranches based on volume profiles, as the traditional VWAP strategy does, but a long short-term memory neural network is used to improve the forecasting accuracy. The Meta Trader then selects a short-term subgoal appropriate to the instant liquidity within each tranche to form a mini-tranche. The Micro Trader finally extracts the instant market state and fulfils the subgoal at the lowest transaction cost. Our experiments on stocks listed on the Shanghai stock exchange demonstrate that our approach outperforms baselines in terms of VWAP slippage, with an average cost saving of 1.16 basis points compared to the best baseline.
Introduction
\label{sec:introduction}
\IEEEPARstart{W}{hen} executing a large order (\emph{a.k.a.} a parent order), brokers may encounter a large negative slippage caused by market impacts. The large order makes the market price fall (or rise) suddenly, resulting in the actual trading price being far from expected, which increases the transaction costs. An effective way to solve this problem, known as algorithmic trading, is to split the parent order into several small orders (\emph{a.k.a.} child orders) over time and execute these orders according to a preset rule.
In 1988, Berkowitz~\emph{et al.} <|cite_start|> (Reference: The total cost of transactions on the NYSE: A measure of execution on market impact cost is developed; it is the difference between a transaction price and th e volume weighted average price for that day. Fourteen thousand insti tutional trades are examined. Market impact costs average five basis points. Commission costs average eighteen basis points. Total costs a verage twenty-three basis points. Total costs vary only slightly acro ss brokers and vary greatly across money managers. There is no trade- off between commission costs and market impact costs. Copyright 1988 by American Finance Association.) <|cite_end|> suggested using the difference between market volume-weighted average price (VWAP) and orders' VWAP as a metric for transaction costs, \emph{a.k.a.} VWAP slippage. They split a parent order based on the market liquidity to reduce the VWAP slippage, known as the VWAP strategy. Since it needed few assumptions about market microstructures <|cite_start|> (Reference: An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization: In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We use two methods to account for the time dependencies in the market data based on two different neural network architecture: 1) Long short-term memory (LSTM) networks, 2) Fully-connected networks (FCN) by stacking the most recent limit orderbook (LOB) information as model inputs. The proposed framework can make trade execution decisions based on level-2 limit order book (LOB) information such as bid/ask prices and volumes directly without manually designed attributes as in previous research. Furthermore, we use a sparse reward function, which gives the agent reward signals at the end of each episode as an indicator of its relative performances against the baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results have demonstrated advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework has outperformed the industry commonly used baseline models such as TWAP, VWAP, and AC as well as several Deep Reinforcement Learning (DRL) models on most of the 14 US equities in our experiments.) <|cite_end|>, brokers widely adopt it in order execution. However, this static rule-based strategy cannot obtain a lower transaction cost in a dynamic market. Researchers began to improve the VWAP strategy from two aspects. One was to improve the estimation of volume profiles <|cite_start|> (Reference: Optimal slice of a VWAP trade: ) <|cite_end|> <|cite_start|> (Reference: Improving VWAP. Strategies: A Dynamical Volume Approach: In this paper, we present a new methodology for modelling intraday volume, which allows for a reduction of the execution risk in VWAP (Volume Weighted Average Price) orders. The results are obtained for all the stocks included in the CAC40 index at the beginning of September 2004. The idea of considered models is based on the decomposition of traded volume into two parts: one reflects volume changes due to market evolution; the second describes the stock specific volume pattern. The dynamic of the specific volume part is depicted by ARMA and SETAR models. The implementation of VWAP strategies allows some dynamic adjustments during the day in order to improve tracking of the end-of-day VWAP.) <|cite_end|>. 
The other was to adjust the trading strategy according to recent historical price and volume information <|cite_start|> (Reference: Competitive algorithms for VWAP and limit order trading: We introduce new online models for two important aspectsof modern financial markets: Volume Weighted Average Pricetrading and limit order books. We provide an extensivestudy of competitive algorithms in these models and relatethem to earlier online algorithms for stock trading.) <|cite_end|> <|cite_start|> (Reference: Bayesian adaptive trading with a daily cycle: Standard models of algorithmic trading neglect the presence of a daily cycle. We construct a model in which the trader uses information from observations of price evolution during the day to continuously update his estimate of other traders' target sizes and directions. He uses this information to determine an optimal trade schedule to minimize total expected cost of trading, subject to sign constraints (never buy as part of a sell program). We argue that although these strategies are determined using very simple dynamic reasoning—at each moment they assume that current conditions will last until the end of trading—they are in fact the globally optimal strategies as would be determined by dynamic programming.) <|cite_end|>. Nevertheless, these approaches relied on strict modeling and assumptions about the market's microstructures, and the unpredictable market limited their efficiency <|cite_start|> (Reference: An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization: In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We use two methods to account for the time dependencies in the market data based on two different neural network architecture: 1) Long short-term memory (LSTM) networks, 2) Fully-connected networks (FCN) by stacking the most recent limit orderbook (LOB) information as model inputs. The proposed framework can make trade execution decisions based on level-2 limit order book (LOB) information such as bid/ask prices and volumes directly without manually designed attributes as in previous research. Furthermore, we use a sparse reward function, which gives the agent reward signals at the end of each episode as an indicator of its relative performances against the baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results have demonstrated advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework has outperformed the industry commonly used baseline models such as TWAP, VWAP, and AC as well as several Deep Reinforcement Learning (DRL) models on most of the 14 US equities in our experiments.) <|cite_end|>.
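To make the cost metric used throughout this paper concrete, the following minimal sketch (an illustration, not code from any of the cited works) computes a market VWAP, an order's execution VWAP, and the resulting VWAP slippage in basis points for a sell order; the function names and the sign convention are assumptions made only for this example.
\begin{verbatim}
import numpy as np

def vwap(prices, volumes):
    # Volume-weighted average price of a sequence of trades.
    p, v = np.asarray(prices, float), np.asarray(volumes, float)
    return float(np.sum(p * v) / np.sum(v))

def vwap_slippage_bps(mkt_prices, mkt_volumes, fill_prices, fill_volumes, side="sell"):
    # VWAP slippage in basis points; positive means the order did worse than
    # the market VWAP (selling below it, or buying above it).
    market_vwap = vwap(mkt_prices, mkt_volumes)
    order_vwap = vwap(fill_prices, fill_volumes)
    gap = market_vwap - order_vwap if side == "sell" else order_vwap - market_vwap
    return gap / market_vwap * 1e4

if __name__ == "__main__":
    # Toy example: the market trades at three prices; our child orders fill at two of them.
    print(vwap_slippage_bps([10.00, 10.02, 9.98], [5000, 3000, 2000],
                            [10.00, 9.98], [600, 400], side="sell"))
\end{verbatim}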
Since stock trading is essentially a decision process consistent with the problem that reinforcement learning (RL) aims to solve, many researchers began to optimize order execution strategies via RL models. Nevmyvaka~\emph{et al.} <|cite_start|> (Reference: Reinforcement learning for optimized trade execution: We present the first large-scale empirical application of reinforcement learning to the important problem of optimized trade execution in modern financial markets. Our experiments are based on 1.5 years of millisecond time-scale limit order data from NASDAQ, and demonstrate the promise of reinforcement learning methods to market microstructure problems. Our learning algorithm introduces and exploits a natural "low-impact" factorization of the state space.) <|cite_end|> first revealed the efficiency of RL in optimizing transaction costs by a large-scale empirical application. Then many researchers began to leverage the RL models to optimize the order execution strategies <|cite_start|> (Reference: An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization: In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We use two methods to account for the time dependencies in the market data based on two different neural network architecture: 1) Long short-term memory (LSTM) networks, 2) Fully-connected networks (FCN) by stacking the most recent limit orderbook (LOB) information as model inputs. The proposed framework can make trade execution decisions based on level-2 limit order book (LOB) information such as bid/ask prices and volumes directly without manually designed attributes as in previous research. Furthermore, we use a sparse reward function, which gives the agent reward signals at the end of each episode as an indicator of its relative performances against the baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results have demonstrated advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework has outperformed the industry commonly used baseline models such as TWAP, VWAP, and AC as well as several Deep Reinforcement Learning (DRL) models on most of the 14 US equities in our experiments.) <|cite_end|> <|cite_start|> (Reference: Double Deep Q-Learning for Optimal Execution: Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous time stochastic control to solve them. Here, we instead take a model free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected Neural Network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.) <|cite_end|>. However, these studies mainly focused on strategies executed for a few minutes, as shown in Table~\ref{tab:summary}. 
They ignored the more challenging strategies that take longer to execute, such as the VWAP strategy. As many studies <|cite_start|> (Reference: Market making and reversal on the stock exchange: Abstract The accurate record of stock market ticker prices displays striking properties of dependence. We find for example that after a decline of 1/8 of a point between transactions, an advance on the next transaction is three times as likely as a decline. Further examinations disclose that after two price changes in the same direction, the odds in favor of a continuation in that direction are almost twice as great as after two changes in opposite directions. The dealer (specialist) in a stock typically quotes the market by announcing the highest buy order and lowest sell order carried on his book. But these orders tend to be concentrated at integers (26, 43), halves , quarters and odd eighths in descending preference. This non-uniform distribution of orders produces some non-random effects in stock price motion. These properties of the stock market are typical of markets in many second-hand goods.) <|cite_end|> <|cite_start|> (Reference: The dynamics of discrete bid and ask quotes: This paper presents an empirical microstructure model of bid and ask quotes that features discreteness, random costs of market making, and ARCH volatility effects. Applied to intraday quotes at 15-minute intervals for Alcoa (a randomly chosen Dow stock), the results show that quote exposure costs contain stochastic components that are persistent and large relative to the deterministic intraday "U" components. Analysis of the filtered estimates of the system suggest that bid and ask costs contain common components, and that these costs reflect risk as proxied by ARCH variance forecasts. Copyright The American Finance Association 1999.) <|cite_end|> have demonstrated, there is a significant daily periodicity in trading intensity; this fluctuating intensity, together with complicated market patterns, increases the complexity of optimal decision making. It is therefore vital to design an RL-based trader that can capture variations of price and liquidity throughout the day, as well as micro changes within a few seconds on the limit order books (LOBs), without making strict assumptions.
In addition, although recent representative research <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> revealed that the deep Q-network (DQN) outperformed human players in most Atari games, it failed to learn an effective policy in \textit{Montezuma's Revenge}, where there are various goals and sparse rewards. A practical approach to this problem is hierarchical reinforcement learning (HRL), which establishes temporal abstractions over the environment and decomposes the decision process into several simpler subgoals, showing good adaptability and efficiency on complex RL tasks <|cite_start|> (Reference: Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation: Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'.) <|cite_end|>.
By incorporating the idea of hierarchical decision-making, we propose a joint deep learning (DL) and HRL architecture termed Macro-Meta-Micro Trader (M3T) to optimize the VWAP strategy. The M3T captures market patterns and executes orders at different temporal scales through three hierarchical traders: the Macro Trader, the Meta Trader, and the Micro Trader. It optimizes the VWAP strategy by improving trading volume estimation, parent order allocation, subgoal selection, and child order execution. Firstly, the Macro Trader adopts a long short-term memory (LSTM) network <|cite_start|> (Reference: Long {Short-Term} memory: Model compression is significant for the wide adoption of Recurrent Neural Networks (RNNs) in both user devices possessing limited resources and business clusters requiring quick responses to large-scale service requests. This work aims to learn structurally-sparse Long Short-Term Memory (LSTM) by reducing the sizes of basic structures within LSTM units, including input updates, gates, hidden states, cell states and outputs. Independently reducing the sizes of basic structures can result in inconsistent dimensions among them, and consequently, end up with invalid LSTM units. To overcome the problem, we propose Intrinsic Sparse Structures (ISS) in LSTMs. Removing a component of ISS will simultaneously decrease the sizes of all basic structures by one and thereby always maintain the dimension consistency. By learning ISS within LSTM units, the obtained LSTMs remain regular while having much smaller basic structures. Based on group Lasso regularization, our method achieves 10.59× speedup without losing any perplexity of a language modeling of Penn TreeBank dataset. It is also successfully evaluated through a compact model with only 2.69M weights for machine Question Answering of SQuAD dataset. Our approach is successfully extended to nonLSTM RNNs, like Recurrent Highway Networks (RHNs). Our source code is available1.) <|cite_end|> to estimate future volume profiles and allocates the parent order to tranches accordingly. Secondly, the Meta Trader selects a subgoal appropriate to the instant liquidity to form a mini-tranche. Finally, the Micro Trader extracts the market state with a multi-headed self-attention encoder and executes orders to fulfil the subgoal and reduce transaction costs. To verify the efficiency of our approach, we perform extensive experiments on eight stocks listed on the Shanghai stock exchange (SSE). The experimental results demonstrate that our M3T outperforms baselines in terms of VWAP slippage on all stocks, with an average cost saving of 1.16 basis points compared to the best baseline, \emph{i.e.}, double DQN (DDQN) <|cite_start|> (Reference: Deep Reinforcement Learning with Double Q-learning: The popular Q-learning algorithm is known to overestimate action values under certain conditions. It was not previously known whether, in practice, such overestimations are common, whether they harm performance, and whether they can generally be prevented. In this paper, we answer all these questions affirmatively. In particular, we first show that the recent DQN algorithm, which combines Q-learning with a deep neural network, suffers from substantial overestimations in some games in the Atari 2600 domain. We then show that the idea behind the Double Q-learning algorithm, which was introduced in a tabular setting, can be generalized to work with large-scale function approximation. We propose a specific adaptation to the DQN algorithm and show that the resulting algorithm not only reduces the observed overestimations, as hypothesized, but that this also leads to much better performance on several games.) <|cite_end|>. Further analysis also demonstrates that each trader in M3T can effectively reduce transaction costs.
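To illustrate how the three levels interact before the formal description in later sections, the following simplified skeleton is a sketch rather than the exact implementation: the interfaces, the heuristic subgoal rule, and the stubbed execution step are placeholder assumptions. The Macro Trader splits the parent order across tranches in proportion to a (here, given) volume profile, the Meta Trader picks a mini-tranche subgoal from a liquidity signal, and the Micro Trader fills it.
\begin{verbatim}
import numpy as np

class MacroTrader:
    # Splits the parent order across intraday tranches in proportion to a
    # predicted volume profile (the paper forecasts this with an LSTM; here
    # the profile is simply passed in).
    def allocate(self, parent_qty, volume_profile):
        profile = np.asarray(volume_profile, float)
        profile = profile / profile.sum()
        return np.round(parent_qty * profile).astype(int)

class MetaTrader:
    # Picks a short-term subgoal (mini-tranche size) given a liquidity signal;
    # a learned policy would replace this assumed heuristic.
    def subgoal(self, tranche_remaining, liquidity_signal):
        fraction = 0.5 if liquidity_signal > 0 else 0.2
        return max(1, int(tranche_remaining * fraction))

class MicroTrader:
    # Executes child orders to fill the subgoal; a learned execution policy
    # over limit-order-book states would replace this stub.
    def execute(self, subgoal_qty, market_state):
        return subgoal_qty  # stub: assume the subgoal is fully filled

def run_episode(parent_qty, volume_profile, liquidity_signals, market_states):
    macro, meta, micro = MacroTrader(), MetaTrader(), MicroTrader()
    filled = 0
    for t, tranche_qty in enumerate(macro.allocate(parent_qty, volume_profile)):
        remaining = int(tranche_qty)
        while remaining > 0:
            goal = min(meta.subgoal(remaining, liquidity_signals[t]), remaining)
            filled_now = micro.execute(goal, market_states[t])
            remaining -= filled_now
            filled += filled_now
    return filled

if __name__ == "__main__":
    profile = [0.18, 0.12, 0.10, 0.10, 0.10, 0.10, 0.12, 0.18]  # U-shaped intraday profile
    print(run_episode(100_000, profile, liquidity_signals=[1, -1] * 4,
                      market_states=[None] * 8))
\end{verbatim}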
Our main contributions are summarized as follows:
\begin{enumerate}
\item We propose a joint DL and HRL architecture termed M3T, which optimizes the VWAP strategy across multiple layers: trading volume estimation, parent order allocation, subgoal selection, and child order execution. The optimal VWAP strategy is obtained by optimizing each layer hierarchically in the M3T. To the best of our knowledge, this is the first study incorporating RL into VWAP strategy optimization.
\item We extend the Markov decision process (MDP) to a hierarchical Markov decision process (HMDP), introducing hierarchical control into the decision process. In both the trading simulation and the design of the HMDP, we also account for microstructures of the Chinese stock market, such as the minimum trade size, which differ from those considered in previous studies.
\item We discuss the effectiveness of different DL and RL models for VWAP strategy optimization, and the experimental results verify the efficiency of the M3T.
\end{enumerate}
The rest of this paper is organized as follows. Section~\ref{sec:rw} discusses the related studies on algorithmic trading and HRL. Section~\ref{sec:pf} introduces the VWAP strategy and proposes HMDP for M3T optimization. Section~\ref{sec:m3t} proposes our M3T architecture. Section~\ref{sec:exp} describes the experimental setup and reports the experimental results and related discussions. Section~\ref{sec:con} provides our conclusions.
Related Work
\label{sec:rw}
\subsection{Algorithmic Trading}
Bertsimas and Lo <|cite_start|> (Reference: Optimal control of execution costs: ) <|cite_end|> were the first to use dynamic programming to find the explicit closed-form solution of order execution. Almgren and Chriss <|cite_start|> (Reference: Optimal execution of portfolio transactions: ) <|cite_end|> proposed the Almgren-Chriss model, extending Bertsimas and Lo's method by adding cost evaluation, price impact functions, and risk aversion parameters. Huberman and Stanzl <|cite_start|> (Reference: Optimal liquidity trading: A liquidity trader wishes to trade a fixed number of shares within a certain time horizon and to minimize the mean and variance of the costs of trading. Explicit formulas for the optimal trading strategies show that risk-averse liquidity traders reduce their order sizes over time and execute a higher fraction of their total trading volume in early periods when price volatility or liquidity increases. In the presence of transaction fees, traders want to trade less often when either price volatility or liquidity goes up or when the speed of price reversion declines. In the multi-asset case, price effects across assets have a substantial impact on trading behavior.) <|cite_end|> considered traders' appetite for liquidity risk in order execution. However, these studies make strong assumptions about underlying price movements or distributions, which are difficult to apply to real stock markets. Berkowitz~\emph{et al.} <|cite_start|> (Reference: The total cost of transactions on the NYSE: A measure of execution on market impact cost is developed; it is the difference between a transaction price and th e volume weighted average price for that day. Fourteen thousand insti tutional trades are examined. Market impact costs average five basis points. Commission costs average eighteen basis points. Total costs a verage twenty-three basis points. Total costs vary only slightly acro ss brokers and vary greatly across money managers. There is no trade- off between commission costs and market impact costs. Copyright 1988 by American Finance Association.) <|cite_end|> suggested using VWAP slippage as the metric of the transaction cost. They proposed the VWAP strategy that split a parent order based on the market liquidity to reduce the VWAP slippage. However, these static strategies could not obtain optimal costs in a dynamic market. There were two main approaches to enhancing the VWAP strategy. The one was to improve the forecasting of volume profiles. Konishi <|cite_start|> (Reference: Optimal slice of a VWAP trade: ) <|cite_end|> modeled the fluctuation pattern of price and volume as Brownian movement. Bialkowski~\emph{et al.} <|cite_start|> (Reference: Improving VWAP. Strategies: A Dynamical Volume Approach: In this paper, we present a new methodology for modelling intraday volume, which allows for a reduction of the execution risk in VWAP (Volume Weighted Average Price) orders. The results are obtained for all the stocks included in the CAC40 index at the beginning of September 2004. The idea of considered models is based on the decomposition of traded volume into two parts: one reflects volume changes due to market evolution; the second describes the stock specific volume pattern. The dynamic of the specific volume part is depicted by ARMA and SETAR models. The implementation of VWAP strategies allows some dynamic adjustments during the day in order to improve tracking of the end-of-day VWAP.) 
<|cite_end|> proposed a volume forecasting model combining market features and stock features. The other was to improve the strategy using recent price and volume information. Kakade~\emph{et al.} <|cite_start|> (Reference: Competitive algorithms for VWAP and limit order trading: We introduce new online models for two important aspectsof modern financial markets: Volume Weighted Average Pricetrading and limit order books. We provide an extensivestudy of competitive algorithms in these models and relatethem to earlier online algorithms for stock trading.) <|cite_end|> introduced the competitive ratio into the VWAP strategy to reflect the market's liquidity and dynamically adjusted child orders. Almgren and Lorenz <|cite_start|> (Reference: Bayesian adaptive trading with a daily cycle: Standard models of algorithmic trading neglect the presence of a daily cycle. We construct a model in which the trader uses information from observations of price evolution during the day to continuously update his estimate of other traders' target sizes and directions. He uses this information to determine an optimal trade schedule to minimize total expected cost of trading, subject to sign constraints (never buy as part of a sell program). We argue that although these strategies are determined using very simple dynamic reasoning—at each moment they assume that current conditions will last until the end of trading—they are in fact the globally optimal strategies as would be determined by dynamic programming.) <|cite_end|> minimized the transaction costs via Bayesian estimators of intraday price shifts. However, these approaches mainly relied on the assumptions of markets' microstructures, such as market dynamics, order types, trading rules, etc. Markets' unpredictable patterns limited their capabilities.
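For reference, the static schedule that these profile-based approaches start from can be summarized in a few lines. The sketch below is illustrative only (the bucket count, the averaging window, and the naive proportional split are assumptions): it estimates an intraday volume profile from historical data and splits a parent order proportionally. The works above improve on this baseline either by forecasting the profile more accurately or by adapting the schedule intraday.
\begin{verbatim}
import numpy as np

def estimate_volume_profile(historical_volumes):
    # historical_volumes: array of shape (num_days, num_buckets) holding the
    # traded volume in each intraday time bucket. Returns the average fraction
    # of daily volume traded in each bucket (a classical static profile).
    v = np.asarray(historical_volumes, float)
    return (v / v.sum(axis=1, keepdims=True)).mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # 20 days, 8 buckets, with heavier trading near the open and the close.
    hist = rng.gamma(2.0, 1.0, size=(20, 8)) * np.array([3, 2, 1, 1, 1, 1, 2, 3])
    profile = estimate_volume_profile(hist)
    print(profile.round(3))             # estimated intraday profile
    print((120_000 * profile).round())  # naive static split of a 120K-share order
\end{verbatim}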
As reinforcement learning brought a new way to find the optimal order execution strategy by automatically learning the underlying dynamics of markets from massive data, many studies began to leverage RL methods to optimize trading strategies dynamically. Table~\ref{tab:summary} summarizes the existing studies on RL-based algorithmic trading strategies for the sell side. The current studies mainly focus on orders with short execution times and small order sizes, which still leaves a big gap relative to actual trading requirements. Nevmyvaka~\emph{et al.} <|cite_start|> (Reference: Reinforcement learning for optimized trade execution: We present the first large-scale empirical application of reinforcement learning to the important problem of optimized trade execution in modern financial markets. Our experiments are based on 1.5 years of millisecond time-scale limit order data from NASDAQ, and demonstrate the promise of reinforcement learning methods to market microstructure problems. Our learning algorithm introduces and exploits a natural "low-impact" factorization of the state space.) <|cite_end|> proposed the first large-scale empirical application of optimizing order execution via RL. They minimized the implementation shortfall (IS) of a buy-side (or sell-side) order within a short discrete period. Hendricks and Wilcox <|cite_start|> (Reference: A reinforcement learning extension to the Almgren-Chriss framework for optimal trade execution: Reinforcement learning is explored as a candidate machine learning technique to enhance existing analytical solutions for optimal trade execution with elements from the market microstructure. Given a volume-to-trade, fixed time horizon and discrete trading periods, the aim is to adapt a given volume trajectory such that it is dynamic with respect to favourable/unfavourable conditions during realtime execution, thereby improving overall cost of trading. We consider the standard Almgren-Chriss model with linear price impact as a candidate base model. This model is popular amongst sell-side institutions as a basis for arrival price benchmark execution algorithms. By training a learning agent to modify a volume trajectory based on the market's prevailing spread and volume dynamics, we are able to improve post-trade implementation shortfall by up to 10.3% on average compared to the base model, based on a sample of stocks and trade sizes in the South African equity market.) <|cite_end|> optimized the IS by using Q-learning to revise the volumes derived from the Almgren-Chriss model. Recently, the advent of the DQN <|cite_start|> (Reference: Human-level control through deep reinforcement learning: ) <|cite_end|> has provided a powerful representation ability for RL agents. Ning~\emph{et al.} <|cite_start|> (Reference: Double Deep Q-Learning for Optimal Execution: Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous time stochastic control to solve them. Here, we instead take a model free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected Neural Network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action.
We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.) <|cite_end|> used DQN directly to train trading agents based on high-dimensional LOB data. Ye~\emph{et al.} <|cite_start|> (Reference: Optimal Trade Execution Based on Deep Deterministic Policy Gradient: ) <|cite_end|> used convolution neural networks to extract spatial information of LOBs and minimized the transaction costs by deep deterministic policy gradient. Lin and Beling <|cite_start|> (Reference: An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization: In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We use two methods to account for the time dependencies in the market data based on two different neural network architecture: 1) Long short-term memory (LSTM) networks, 2) Fully-connected networks (FCN) by stacking the most recent limit orderbook (LOB) information as model inputs. The proposed framework can make trade execution decisions based on level-2 limit order book (LOB) information such as bid/ask prices and volumes directly without manually designed attributes as in previous research. Furthermore, we use a sparse reward function, which gives the agent reward signals at the end of each episode as an indicator of its relative performances against the baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results have demonstrated advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework has outperformed the industry commonly used baseline models such as TWAP, VWAP, and AC as well as several Deep Reinforcement Learning (DRL) models on most of the 14 US equities in our experiments.) <|cite_end|> leveraged LSTM networks to extract features from high-dimensional market information as agents' observations and optimized the IS with the proximal policy optimization method. Nevertheless, the existing studies mainly focused on optimizing short-duration order execution and paid less attention to others.
\begin{table}[t]
\caption{Existing Studies of RL in Algorithmic Trading.\label{tab:summary}}
\begin{threeparttable}
\begin{tabular*}{\hsize}{@{\extracolsep{\fill}}|l|r|r|}
\hline
\textbf{Author} & \textbf{Execution Time} & \textbf{Total Order Size} \\
\hline
Nevmyvaka~\emph{et al.} <|cite_start|> (Reference: Reinforcement learning for optimized trade execution: We present the first large-scale empirical application of reinforcement learning to the important problem of optimized trade execution in modern financial markets. Our experiments are based on 1.5 years of millisecond time-scale limit order data from NASDAQ, and demonstrate the promise of reinforcement learning methods to market microstructure problems. Our learning algorithm introduces and exploits a natural "low-impact" factorization of the state space.) <|cite_end|> & 2$\sim$8 min & 5K; 10K \\
Hendricks and Wilcox <|cite_start|> (Reference: A reinforcement learning extension to the Almgren-Chriss framework for optimal trade execution: Reinforcement learning is explored as a candidate machine learning technique to enhance existing analytical solutions for optimal trade execution with elements from the market microstructure. Given a volume-to-trade, fixed time horizon and discrete trading periods, the aim is to adapt a given volume trajectory such that it is dynamic with respect to favourable/unfavourable conditions during realtime execution, thereby improving overall cost of trading. We consider the standard Almgren-Chriss model with linear price impact as a candidate base model. This model is popular amongst sell-side institutions as a basis for arrival price benchmark execution algorithms. By training a learning agent to modify a volume trajectory based on the market's prevailing spread and volume dynamics, we are able to improve post-trade implementation shortfall by up to 10.3% on average compared to the base model, based on a sample of stocks and trade sizes in the South African equity market.) <|cite_end|> & $20\sim60$ min & 10K; 1M\\
Shen~\emph{et al.} <|cite_start|> (Reference: Risk-averse reinforcement learning for algorithmic trading: We propose a general framework of risk-averse reinforcement learning for algorithmic trading. Our approach is tested in an experiment based on 1.5 years of millisecond time-scale limit order data from NASDAQ, which contain the data around the 2010 flash crash. The results show that our algorithm outperforms the risk-neutral reinforcement learning algorithm by 1) keeping the trading cost at a substantially low level at the spot when the flash crash happened, and 2) significantly reducing the risk over the whole test period.) <|cite_end|> & 10 min & 20K \\
Ning~\emph{et al.} <|cite_start|> (Reference: Double Deep Q-Learning for Optimal Execution: Optimal trade execution is an important problem faced by essentially all traders. Much research into optimal execution uses stringent model assumptions and applies continuous time stochastic control to solve them. Here, we instead take a model free approach and develop a variation of Deep Q-Learning to estimate the optimal actions of a trader. The model is a fully connected Neural Network trained using Experience Replay and Double DQN with input features given by the current state of the limit order book, other trading signals, and available execution actions, while the output is the Q-value function estimating the future rewards under an arbitrary action. We apply our model to nine different stocks and find that it outperforms the standard benchmark approach on most stocks using the measures of (i) mean and median out-performance, (ii) probability of out-performance, and (iii) gain-loss ratios.) <|cite_end|> & 60 min & 2K \\
Ye~\emph{et al.} <|cite_start|> (Reference: Optimal Trade Execution Based on Deep Deterministic Policy Gradient: ) <|cite_end|> & 2 min & 5K; 10K \\
Lin and Beling <|cite_start|> (Reference: An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization: In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We use two methods to account for the time dependencies in the market data based on two different neural network architecture: 1) Long short-term memory (LSTM) networks, 2) Fully-connected networks (FCN) by stacking the most recent limit orderbook (LOB) information as model inputs. The proposed framework can make trade execution decisions based on level-2 limit order book (LOB) information such as bid/ask prices and volumes directly without manually designed attributes as in previous research. Furthermore, we use a sparse reward function, which gives the agent reward signals at the end of each episode as an indicator of its relative performances against the baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results have demonstrated advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework has outperformed the industry commonly used baseline models such as TWAP, VWAP, and AC as well as several Deep Reinforcement Learning (DRL) models on most of the 14 US equities in our experiments.) <|cite_end|> & 2 min & $300\sim7$K \\
Fang~\emph{et al.} <|cite_start|> (Reference: Universal Trading for Order Execution with Oracle Policy Distillation: As a fundamental problem in algorithmic trading, order execution aims at fulfilling a specific trading order, either liquidation or acquirement, for a given instrument. Towards effective execution strategy, recent years have witnessed the shift from the analytical view with model-based market assumptions to model-free perspective, i.e., reinforcement learning, due to its nature of sequential decision optimization. However, the noisy and yet imperfect market information that can be leveraged by the policy has made it quite challenging to build up sample efficient reinforcement learning methods to achieve effective order execution. In this paper, we propose a novel universal trading policy optimization framework to bridge the gap between the noisy yet imperfect market states and the optimal action sequences for order execution. Particularly, this framework leverages a policy distillation method that can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information to approximate the optimal trading strategy. The extensive experiments have shown significant improvements of our method over various strong baselines, with reasonable trading actions.) <|cite_end|> & 30 min & $<1$K \\
Wang~\emph{et al.} <|cite_start|> (Reference: Commission fee is not enough: A hierarchical reinforced framework for portfolio management: Portfolio management via reinforcement learning is at the forefront of fintech research, which explores how to optimally reallocate a fund into different financial assets over the long term by trial-and-error. Existing methods are impractical since they usually assume each reallocation can be finished immediately and thus ignoring the price slippage as part of the trading cost. To address these issues, we propose a hierarchical reinforced stock trading system for portfolio management (HRPM). Concretely, we decompose the trading process into a hierarchy of portfolio management over trade execution and train the corresponding policies. The high-level policy gives portfolio weights at a lower frequency to maximize the long-term profit and invokes the low-level policy to sell or buy the corresponding shares within a short time window at a higher frequency to minimize the trading cost. We train two levels of policies via a pre-training scheme and an iterative training scheme for data efficiency. Extensive experimental results in the U.S. market and the China market demonstrate that HRPM achieves significant improvement against many state-of-the-art approaches.) <|cite_end|> & 20 min & $10$K$\sim40$K \\
\textbf{Ours} & 240 min & $100K\sim160K$\tnote{*} \\
\hline
\end{tabular*}
\begin{tablenotes}
\item[*] The total order size of our method is proportional to the given minimum trading unit. Increasing the minimum unit allows our method to be adapted to larger orders.
\end{tablenotes}
\end{threeparttable}
\end{table}
Besides, some studies on portfolio management have also noticed the transaction costs caused by the adjustment of portfolio position in recent years. Fang~\emph{et al.} <|cite_start|> (Reference: Universal Trading for Order Execution with Oracle Policy Distillation: As a fundamental problem in algorithmic trading, order execution aims at fulfilling a specific trading order, either liquidation or acquirement, for a given instrument. Towards effective execution strategy, recent years have witnessed the shift from the analytical view with model-based market assumptions to model-free perspective, i.e., reinforcement learning, due to its nature of sequential decision optimization. However, the noisy and yet imperfect market information that can be leveraged by the policy has made it quite challenging to build up sample efficient reinforcement learning methods to achieve effective order execution. In this paper, we propose a novel universal trading policy optimization framework to bridge the gap between the noisy yet imperfect market states and the optimal action sequences for order execution. Particularly, this framework leverages a policy distillation method that can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information to approximate the optimal trading strategy. The extensive experiments have shown significant improvements of our method over various strong baselines, with reasonable trading actions.) <|cite_end|> suggested optimizing algorithmic trading strategies based on knowledge distillation. They used a teacher network with future information to guide a student network with historical data to learn the optimal strategy. Wang~\emph{et al.} <|cite_start|> (Reference: Commission fee is not enough: A hierarchical reinforced framework for portfolio management: Portfolio management via reinforcement learning is at the forefront of fintech research, which explores how to optimally reallocate a fund into different financial assets over the long term by trial-and-error. Existing methods are impractical since they usually assume each reallocation can be finished immediately and thus ignoring the price slippage as part of the trading cost. To address these issues, we propose a hierarchical reinforced stock trading system for portfolio management (HRPM). Concretely, we decompose the trading process into a hierarchy of portfolio management over trade execution and train the corresponding policies. The high-level policy gives portfolio weights at a lower frequency to maximize the long-term profit and invokes the low-level policy to sell or buy the corresponding shares within a short time window at a higher frequency to minimize the trading cost. We train two levels of policies via a pre-training scheme and an iterative training scheme for data efficiency. Extensive experimental results in the U.S. market and the China market demonstrate that HRPM achieves significant improvement against many state-of-the-art approaches.) <|cite_end|> proposed a two-layer hierarchical RL model, separating the portfolio management and order execution. The upper agent determined the portfolio position ratio, while the lower agent adjusted the position and minimized transaction costs. However, the position adjustment window was short. These methods' effectiveness was only empirically verified on a short time scale. 
Designing an RL-based trader that is sensitive to both micro and macro changes in price and liquidity throughout the trading day is therefore of great practical significance.
\subsection{Hierarchical Reinforcement Learning}
Sparse rewards and long execution times presented tremendous challenges for training RL agents. An effective way to address the challenges was by establishing temporal abstractions to decompose decision processes into simpler subgoals. Dayan and Hinton <|cite_start|> (Reference: Feudal Reinforcement Learning: One way to speed up reinforcement learning is to enable learning to happen simultaneously at multiple resolutions in space and time. This paper shows how to create a Q-learning managerial hierarchy in which high level managers learn how to set tasks to their submanagers who, in turn, learn how to satisfy them. Submanagers need not initially understand their managers' commands. They simply learn to maximise their reinforcement in the context of the current command.
We illustrate the system using a simple maze task. As the system learns how to get around, satisfying commands at the multiple levels, it explores more efficiently than standard, flat, Q-learning and builds a more comprehensive map.) <|cite_end|> first proposed the idea of multi-level control by dividing a task into three levels: super-manager, manager, and sub-manager. All levels obey two principles: 1) reward hiding: an agent only needs to fulfil the task given by its immediate superior rather than considering the completion of the overall task; 2) information hiding: the sub-manager does not need to know the goal issued by the super-manager, and the super-manager does not need to know how the sub-manager achieves the goal. This work provided a prototype for subsequent studies, guiding the design of HRL structures.
After that, Sutton~\emph{et al.} <|cite_start|> (Reference: Between {{MDPs}} and semi-{{MDPs}}: {{A}} framework for temporal abstraction in reinforcement learning: Learning, planning, and representing knowledge at multiple levels of temporal abstraction are key, longstanding challenges for AI. In this paper we consider how these challenges can be addressed within the mathematical framework of reinforcement learning and Markov decision processes (MDPs). We extend the usual notion of action in this framework to include options —closed-loop policies for taking action over a period of time. Examples of options include picking up an object, going to lunch, and traveling to a distant city, as well as primitive actions such as muscle twitches and joint torques. Overall, we show that options enable temporally abstract knowledge and action to be included in the reinforcement learning framework in a natural and general way. In particular, we show that options may be used interchangeably with primitive actions in planning methods such as dynamic programming and in learning methods such as Q-learning. Formally, a set of options defined over an MDP constitutes a semi-Markov decision process (SMDP), and the theory of SMDPs provides the foundation for the theory of options. However, the most interesting issues concern the interplay between the underlying MDP and the SMDP and are thus beyond SMDP theory. We present results for three such cases: (1) we show that the results of planning with options can be used during execution to interrupt options and thereby perform even better than planned, (2) we introduce new intra-option methods that are able to learn about an option from fragments of its execution, and (3) we propose a notion of subgoal that can be used to improve the options themselves. All of these results have precursors in the existing literature; the contribution of this paper is to establish them in a simpler and more general setting with fewer changes to the existing reinforcement learning framework. In particular, we show that these results can be obtained without committing to (or ruling out) any particular approach to state) <|cite_end|> combined the idea of temporal abstraction over the action set with MDPs and semi-MDPs (SMDPs). In their options framework, the upper level selects an option at each step, which is either a primitive action or a temporally extended course of action, while the lower level defines a policy over primitive actions for each option.
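As a schematic of this two-level control loop, the following toy sketch runs tabular SMDP-style Q-learning over two hand-crafted options on a small chain environment; the option set, termination rule, and hyperparameters are simplified assumptions and are not the constructions of the cited works.
\begin{verbatim}
import random

# Toy chain MDP: states 0..N, reward 1 on reaching state N, 0 otherwise.
N = 10

def env_step(state, action):              # primitive actions: -1 (left), +1 (right)
    nxt = min(max(state + action, 0), N)
    return nxt, (1.0 if nxt == N else 0.0), nxt == N

# Two hand-crafted options: "go left for up to 3 steps", "go right for up to 3 steps".
OPTIONS = {0: (-1, 3), 1: (+1, 3)}         # option id -> (primitive action, max duration)

def run_option(state, option_id, gamma):
    # Lower level: follow the option's internal policy until it terminates.
    action, max_len = OPTIONS[option_id]
    total, discount, steps, done = 0.0, 1.0, 0, False
    for _ in range(max_len):
        state, r, done = env_step(state, action)
        total += discount * r
        discount *= gamma
        steps += 1
        if done:
            break
    return state, total, steps, done

def smdp_q_learning(episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    # Upper level: epsilon-greedy SMDP Q-learning over options.
    Q = [[0.0, 0.0] for _ in range(N + 1)]
    for _ in range(episodes):
        s, done = random.randrange(N), False
        for _ in range(100):                                   # cap episode length
            if random.random() < eps or Q[s][0] == Q[s][1]:
                o = random.randrange(2)
            else:
                o = 0 if Q[s][0] > Q[s][1] else 1
            s2, reward, k, done = run_option(s, o, gamma)
            target = reward + (0.0 if done else (gamma ** k) * max(Q[s2]))
            Q[s][o] += alpha * (target - Q[s][o])
            s = s2
            if done:
                break
    return Q

if __name__ == "__main__":
    Q = smdp_q_learning()
    print([0 if Q[s][0] > Q[s][1] else 1 for s in range(N)])   # greedy option per state
\end{verbatim}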
Sorg and Singh <|cite_start|> (Reference: Linear options: Learning, planning, and representing knowledge in large state spaces at multiple levels of temporal abstraction are key, long-standing challenges for building flexible autonomous agents. The options framework provides a formal mechanism for specifying and learning temporally-extended skills. Although past work has demonstrated the benefit of acting according to options in continuous state spaces, one of the central advantages of temporal abstraction---the ability to plan using a temporally abstract model---remains a challenging problem when the number of environment states is large or infinite. In this work, we develop a knowledge construct, the linear option, which is capable of modeling temporally abstract dynamics in continuous state spaces. We show that planning with a linear expectation model of an option's dynamics converges to a fixed point with low Temporal Difference (TD) error. Next, building on recent work on linear feature selection, we show conditions under which a linear feature set is sufficient for accurately representing the value function of an option policy. We extend this result to show conditions under which multiple options may be repeatedly composed to create new options with accurate linear models. Finally, we demonstrate linear option learning and planning algorithms in a simulated robot environment.) <|cite_end|> optimized the options by combining existing options. Szepesvari~\emph{et al.} <|cite_start|> (Reference: Universal option models: We consider the problem of learning models of options for real-time abstract planning, in the setting where reward functions can be specified at any time and their expected returns must be efficiently computed. We introduce a new model for an option that is independent of any reward function, called the universal option model (UOM). We prove that the UOM of an option can construct a traditional option model given a reward function, and also supports efficient computation of the option-conditional return. We extend the UOM to linear function approximation, and we show the UOM gives the TD solution of option returns and the value function of a policy over options. We provide a stochastic approximation algorithm for incrementally learning UOMs from data and prove its consistency. We demonstrate our method in two domains. The first domain is a real-time strategy game, where the controller must select the best game unit to accomplish a dynamically-specified task. The second domain is article recommendation, where each user query defines a new reward function and an article's relevance is the expected return from following a policy that follows the citations between articles. Our experiments show that UOMs are substantially more efficient than previously known methods for evaluating option returns and policies over options.) <|cite_end|> optimized the options by using different reward functions.
Bacon~\emph{et al.} <|cite_start|> (Reference: The Option-Critic Architecture: Temporal abstraction is key to scaling up learning and planning in reinforcement learning. While planning with temporally extended actions is well understood, creating such abstractions autonomously from data has remained challenging. We tackle this problem in the framework of options [Sutton, Precup & Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options and propose a new option-critic architecture capable of learning both the internal policies and the termination conditions of options, in tandem with the policy over options, and without the need to provide any additional rewards or subgoals. Experimental results in both discrete and continuous environments showcase the flexibility and efficiency of the framework.) <|cite_end|> combined the option framework with the actor-critic structure and directly adopted the policy gradient algorithm to synchronously optimize multi-level control.
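To make the division of labour in the option framework concrete, the control flow can be sketched as follows; this is an illustrative Python sketch with hypothetical interfaces (\texttt{meta\_policy}, \texttt{option.act}, a gym-style \texttt{env}), not code from any of the cited works:
\begin{verbatim}
# Illustrative two-level control loop in the options framework (schematic only).
def run_episode(env, meta_policy, max_steps=1000):
    state, steps = env.reset(), 0
    while steps < max_steps:
        option = meta_policy.select(state)              # upper agent: choose an option
        while steps < max_steps and not option.terminated(state):
            action = option.act(state)                  # lower agent: primitive action
            state, reward, done, _ = env.step(action)   # gym-style environment step
            steps += 1
            if done:
                return
\end{verbatim}
Learning updates (for the option policies, their termination conditions, and the policy over options) are deliberately omitted; the point is only to show which decision each level of the hierarchy is responsible for.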
Besides, some studies have used mutual information rewards to drive low-level agents to actively explore the dynamics of actions and environments without environmental rewards <|cite_start|> (Reference: Variational Information Maximisation for Intrinsically Motivated Reinforcement Learning: The mutual information is a core statistical quantity that has applications in all areas of machine learning, whether this is in training of density models over multiple data modalities, in maximising the efficiency of noisy transmission channels, or when learning behaviour policies for exploration by artificial agents. Most learning algorithms that involve optimisation of the mutual information rely on the Blahut-Arimoto algorithm --- an enumerative algorithm with exponential complexity that is not suitable for modern machine learning applications. This paper provides a new approach for scalable optimisation of the mutual information by merging techniques from variational inference and deep learning. We develop our approach by focusing on the problem of intrinsically-motivated learning, where the mutual information forms the definition of a well-known internal drive known as empowerment. Using a variational lower bound on the mutual information, combined with convolutional networks for handling visual input streams, we develop a stochastic optimisation algorithm that allows for scalable information maximisation and empowerment-based reasoning directly from pixels to actions.) <|cite_end|> <|cite_start|> (Reference: Diversity is All You Need: Learning Skills without a Reward Function: Intelligent creatures can explore their environments and learn useful skills without supervision. In this paper, we propose DIAYN ('Diversity is All You Need'), a method for learning useful skills without a reward function. Our proposed method learns skills by maximizing an information theoretic objective using a maximum entropy policy. On a variety of simulated robotic tasks, we show that this simple objective results in the unsupervised emergence of diverse skills, such as walking and jumping. In a number of reinforcement learning benchmark environments, our method is able to learn a skill that solves the benchmark task despite never receiving the true task reward. We show how pretrained skills can provide a good parameter initialization for downstream tasks, and can be composed hierarchically to solve complex, sparse reward tasks. Our results suggest that unsupervised discovery of skills can serve as an effective pretraining mechanism for overcoming challenges of exploration and data efficiency in reinforcement learning.) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning with Competitive Ensembles of Information-Constrained Primitives: Reinforcement learning agents that operate in diverse and complex environments can benefit from the structured decomposition of their behavior. Often, this is addressed in the context of hierarchical reinforcement learning, where the aim is to decompose a policy into lower-level primitives or options, and a higher-level meta-policy that triggers the appropriate behaviors for a given situation. However, the meta-policy must still produce appropriate decisions in all states. In this work, we propose a policy design that decomposes into primitives, similarly to hierarchical reinforcement learning, but without a high-level meta-policy. Instead, each primitive can decide for themselves whether they wish to act in the current state. 
We use an information-theoretic mechanism for enabling this decentralized decision: each primitive chooses how much information it needs about the current state to make a decision and the primitive that requests the most information about the current state acts in the world. The primitives are regularized to use as little information as possible, which leads to natural competition and specialization. We experimentally demonstrate that this policy architecture improves over both flat and hierarchical policies in terms of generalization.) <|cite_end|>.
From the perspective of state and goal abstractions, Schaul~\emph{et al.} <|cite_start|> (Reference: Universal Value Function Approximators: Value functions are a core component of reinforcement learning systems. The main idea is to to construct a single function approximator V (s; θ) that estimates the long-term reward from any state s, using parameters θ. In this paper we introduce universal value function approximators (UVFAs) V (s, g; θ) that generalise not just over states s but also over goals g. We develop an efficient technique for supervised learning of UVFAs, by factoring observed values into separate embedding vectors for state and goal, and then learning a mapping from s and g to these factored embedding vectors. We show how this technique may be incorporated into a reinforcement learning algorithm that updates the UVFA solely from observed rewards. Finally, we demonstrate that a UVFA can successfully generalise to previously unseen goals.) <|cite_end|> proposed a universal value function for multi-goal optimization problems, which considered goals along with states. Kulkarni~\emph{et al.} <|cite_start|> (Reference: Hierarchical Deep Reinforcement Learning: Integrating Temporal Abstraction and Intrinsic Motivation: Learning goal-directed behavior in environments with sparse feedback is a major challenge for reinforcement learning algorithms. The primary difficulty arises due to insufficient exploration, resulting in an agent being unable to learn robust value functions. Intrinsically motivated agents can explore new behavior for its own sake rather than to directly solve problems. Such intrinsic behaviors could eventually help the agent solve tasks posed by the environment. We present hierarchical-DQN (h-DQN), a framework to integrate hierarchical value functions, operating at different temporal scales, with intrinsically motivated deep reinforcement learning. A top-level value function learns a policy over intrinsic goals, and a lower-level function learns a policy over atomic actions to satisfy the given goals. h-DQN allows for flexible goal specifications, such as functions over entities and relations. This provides an efficient space for exploration in complicated environments. We demonstrate the strength of our approach on two problems with very sparse, delayed feedback: (1) a complex discrete stochastic decision process, and (2) the classic ATARI game `Montezuma's Revenge'.) <|cite_end|> proposed a temporal abstract HRL, which divided the overall goal into several subgoals and learned an effective global strategy over each subgoal. It showed good adaptability and efficiency in addressing complex RL tasks. Nachum~\emph{et al.} <|cite_start|> (Reference: Data-Efficient Hierarchical Reinforcement Learning: Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. 
For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques.) <|cite_end|> solved the non-stationary problem between the upper and lower agents in off-policy HRL optimization. Besides, some HRL studies focused on how to learn effective policy based on the idea of hindsight in an environment with sparse rewards. Andrychowicz~\emph{et al.} <|cite_start|> (Reference: Hindsight Experience Replay: Dealing with sparse rewards is one of the biggest challenges in Reinforcement Learning (RL). We present a novel technique called Hindsight Experience Replay which allows sample-efficient learning from rewards which are sparse and binary and therefore avoid the need for complicated reward engineering. It can be combined with an arbitrary off-policy RL algorithm and may be seen as a form of implicit curriculum. We demonstrate our approach on the task of manipulating objects with a robotic arm. In particular, we run experiments on three different tasks: pushing, sliding, and pick-and-place, in each case using only binary rewards indicating whether or not the task is completed. Our ablation studies show that Hindsight Experience Replay is a crucial ingredient which makes training possible in these challenging environments. We show that our policies trained on a physics simulation can be deployed on a physical robot and successfully complete the task.) <|cite_end|> proposed the hindsight experience replay to select efficient goals under sparse rewards. The effective policy was learned on the failed transition by replacing the unreachable goal with the last reached state. The transition would then receive a positive reward. Since the shift of lower-level policy would affect the dynamic of the upper level, non-stationary problems occurred that previously achievable goals could not be achieved. Levy~\emph{et al.} <|cite_start|> (Reference: Learning Multi-Level Hierarchies with Hindsight: Hierarchical agents have the potential to solve sequential decision making tasks with greater sample efficiency than their non-hierarchical counterparts because hierarchical agents can break down tasks into sets of subtasks that only require short sequences of decisions. In order to realize this potential of faster learning, hierarchical agents need to be able to learn their multiple levels of policies in parallel so these simpler subproblems can be solved simultaneously. 
Yet, learning multiple levels of policies in parallel is hard because it is inherently unstable: changes in a policy at one level of the hierarchy may cause changes in the transition and reward functions at higher levels in the hierarchy, making it difficult to jointly learn multiple levels of policies. In this paper, we introduce a new Hierarchical Reinforcement Learning (HRL) framework, Hierarchical Actor-Critic (HAC), that can overcome the instability issues that arise when agents try to jointly learn multiple levels of policies. The main idea behind HAC is to train each level of the hierarchy independently of the lower levels by training each level as if the lower level policies are already optimal. We demonstrate experimentally in both grid world and simulated robotics domains that our approach can significantly accelerate learning relative to other non-hierarchical and hierarchical methods. Indeed, our framework is the first to successfully learn 3-level hierarchies in parallel in tasks with continuous state and action spaces.) <|cite_end|> assumed that the policy at the lower level was already optimal, so changes in the policy at the lower level would not affect the policy at the upper level. <|paper_end|> | [
"<|reference_start|> Optimal control of execution costs: <|reference_end|>",
"<|reference_start|> Reinforcement learning for optimized trade execution: We present the first large-scale empirical application of reinforcement learning to the important problem of optimized trade execution in modern financial markets. Our experiments are based on 1.5 years of millisecond time-scale limit order data from NASDAQ, and demonstrate the promise of reinforcement learning methods to market microstructure problems. Our learning algorithm introduces and exploits a natural \"low-impact\" factorization of the state space. <|reference_end|>",
"<|reference_start|> An End-to-End Optimal Trade Execution Framework based on Proximal Policy Optimization: In this article, we propose an end-to-end adaptive framework for optimal trade execution based on Proximal Policy Optimization (PPO). We use two methods to account for the time dependencies in the market data based on two different neural network architecture: 1) Long short-term memory (LSTM) networks, 2) Fully-connected networks (FCN) by stacking the most recent limit orderbook (LOB) information as model inputs. The proposed framework can make trade execution decisions based on level-2 limit order book (LOB) information such as bid/ask prices and volumes directly without manually designed attributes as in previous research. Furthermore, we use a sparse reward function, which gives the agent reward signals at the end of each episode as an indicator of its relative performances against the baseline model, rather than implementation shortfall (IS) or a shaped reward function. The experimental results have demonstrated advantages over IS and the shaped reward function in terms of performance and simplicity. The proposed framework has outperformed the industry commonly used baseline models such as TWAP, VWAP, and AC as well as several Deep Reinforcement Learning (DRL) models on most of the 14 US equities in our experiments. <|reference_end|>",
"<|reference_start|> Data-Efficient Hierarchical Reinforcement Learning: Hierarchical reinforcement learning (HRL) is a promising approach to extend traditional reinforcement learning (RL) methods to solve more complex tasks. Yet, the majority of current HRL methods require careful task-specific design and on-policy training, making them difficult to apply in real-world scenarios. In this paper, we study how we can develop HRL algorithms that are general, in that they do not make onerous additional assumptions beyond standard RL algorithms, and efficient, in the sense that they can be used with modest numbers of interaction samples, making them suitable for real-world problems such as robotic control. For generality, we develop a scheme where lower-level controllers are supervised with goals that are learned and proposed automatically by the higher-level controllers. To address efficiency, we propose to use off-policy experience for both higher and lower-level training. This poses a considerable challenge, since changes to the lower-level behaviors change the action space for the higher-level policy, and we introduce an off-policy correction to remedy this challenge. This allows us to take advantage of recent advances in off-policy model-free RL to learn both higher- and lower-level policies using substantially fewer environment interactions than on-policy algorithms. We term the resulting HRL agent HIRO and find that it is generally applicable and highly sample-efficient. Our experiments show that HIRO can be used to learn highly complex behaviors for simulated robots, such as pushing objects and utilizing them to reach target locations, learning from only a few million samples, equivalent to a few days of real-time interaction. In comparisons with a number of prior HRL methods, we find that our approach substantially outperforms previous state-of-the-art techniques. <|reference_end|>"
] | [
16,
24,
35,
50
] | {"<|cite_1|>": "ss-1557847", "<|cite_2|>": "ss-762552", "<|cite_3|>": "ss-1557848", "<|cite_4|>": "ss-1557849", "<|cite_5|>": "ss-762551", "<|cite_6|>": "ss-1557850", "<|cite_7|>": "ss-762552", "<|cite_8|>": "ss-1290686", "<|cite_9|>": "ss-762552", "<|cite_10|>": "arxiv-184730", "<|cite_11|>": "ss-1557851", "<|cite_12|>": "ss-1557852", "<|cite_13|>": "ss-749221", "<|cite_14|>": "arxiv-96389", "<|cite_15|>": "ss-710343", "<|cite_16|>": "arxiv-84365", "<|cite_17|>": "ss-1290682", "<|cite_18|>": "ss-1290683", "<|cite_19|>": "ss-1557853", "<|cite_20|>": "ss-1557847", "<|cite_21|>": "ss-1557848", "<|cite_22|>": "ss-1557849", "<|cite_23|>": "ss-762551", "<|cite_24|>": "ss-1557850", "<|cite_25|>": "ss-1290686", "<|cite_26|>": "ss-1290690", "<|cite_27|>": "ss-749221", "<|cite_28|>": "arxiv-184730", "<|cite_29|>": "ss-1557854", "<|cite_30|>": "ss-762552", "<|cite_31|>": "ss-1290686", "<|cite_32|>": "ss-1290690", "<|cite_33|>": "ss-1557855", "<|cite_34|>": "arxiv-184730", "<|cite_35|>": "ss-1557854", "<|cite_36|>": "ss-762552", "<|cite_37|>": "arxiv-328536", "<|cite_38|>": "ss-1526041", "<|cite_39|>": "arxiv-328536", "<|cite_40|>": "ss-1526041", "<|cite_41|>": "ss-1541317", "<|cite_42|>": "ss-852913", "<|cite_43|>": "ss-1557856", "<|cite_44|>": "ss-1315669", "<|cite_45|>": "arxiv-105952", "<|multi_cite_46_1|>": "arxiv-84772", "<|multi_cite_46_2|>": "arxiv-148666", "<|multi_cite_46_3|>": "arxiv-211455", "<|cite_47|>": "ss-682566", "<|cite_48|>": "arxiv-96389", "<|cite_49|>": "arxiv-159348", "<|cite_50|>": "arxiv-128546", "<|cite_51|>": "arxiv-142116"} |
2011.02044-0 | <|paper_start|> Title: Circuit lower bounds for low-energy states of quantum code Hamiltonians
Abstract: Circuit lower bounds for low-energy states of quantum code Hamiltonians: The No Low-energy Trivial States (NLTS) conjecture of Freedman and Hastings, 2014 -- which posits the existence of a local Hamiltonian with a super-constant quantum circuit lower bound on the complexity of all low-energy states -- identifies a fundamental obstacle to the resolution of the quantum PCP conjecture. In this work, we provide new techniques, based on entropic and local indistinguishability arguments, that prove circuit lower bounds for all the low-energy states of local Hamiltonians arising from quantum error-correcting codes. For local Hamiltonians arising from nearly linear-rate or nearly linear-distance LDPC stabilizer codes, we prove super-constant circuit lower bounds for the complexity of all states of energy o(n). Such codes are known to exist and are not necessarily locally testable, a property previously suspected to be essential for the NLTS conjecture. Curiously, such codes can also be constructed on a two-dimensional lattice, showing that low-depth states cannot accurately approximate the ground-energy even in physically relevant systems.
Introduction
Ground- and low-energy states of local Hamiltonians are the central objects of study in condensed matter physics. The $\QMA$-complete local Hamiltonian problem is also the quantum analog of the $\NP$-complete constraint satisfaction problem (CSP) and ground-states (and low-energy states) of local Hamiltonians correspond to solutions (near optimal solutions) of the problem. A sweeping insight into the computational properties of the low energy spectrum is embodied in the quantum PCP conjecture, which is arguably the most important open question in quantum complexity theory. Just as the classical PCP theorem establishes that CSPs with a promise gap remain $\NP$-complete, the quantum PCP conjecture asserts that local Hamiltonians with a promise gap remain $\QMA$-complete. But despite numerous results providing evidence both for <|cite_start|> (Reference: The detectability lemma and quantum gap amplification: The quantum analogue of the constraint satisfaction problem is the fundamental physics question of finding the minimal energy state of a local Hamiltonian --- each term of the Hamiltonian specifies a local constraint whose violation contributes to the energy of the given quantum state. However, in general it is not meaningful to ask for the probability that a given quantum state violates at least one constraint; the difficulty being that the terms of the Hamiltonian do not commute. We show how to make sense of this notion under mild restrictions on the form of the Hamiltonian. We then provide two main results. We first prove the quantum detectability lemma, which states that the probability of detecting a violation of a constraint in a local Hamiltonian system is bounded from below by some constant times the minimal energy of the system. The proof reveals some intrinsic structure of the Hilbert space of local Hamiltonians, which is captured in the "exponential decay" lemma, and formalized using a novel decomposition of the Hilbert space called the XY decomposition. As an application of the detectability lemma, we prove our second main result: a quantum analogue of the classical gap amplification lemma using random walks over expander graphs, which was the seed for Dinur's celebrated new proof of the PCP theorem [6]. We hope that these results will pave the way to better understandings of the computational properties of local Hamiltonians systems, and to the evolving field of quantum Hamiltonian complexity.) <|cite_end|> <|cite_start|> (Reference: Quantum systems on non-k-hyperfinite complexes: A generalization of classical statistical mechanics on expander graphs: We construct families of cell complexes that generalize expander graphs. These families are called non-k-hyperfinite, generalizing the idea of a non-hyperfinite (NH) family of graphs. Roughly speaking, such a complex has the property that one cannot remove a small fraction of points and be left with an object that looks k - 1-dimensional at large scales. We then consider certain quantum systems on these complexes. A future goal is to construct a family of Hamiltonians such that every low energy state has topological order as part of an attempt to prove the quantum PCP conjecture. This goal is approached by constructing a toric code Hamiltonian with the property that every low energy state without vertex defects has topological order, a property that would not hold for any local system in any lattice Zd or indeed on any 1-hyperfinite complex. Further, such NH complexes find application in quantum coding theory. 
The hypergraph product codes[1] of Tillich and Zemor are generalized using NH complexes.) <|cite_end|> <|cite_start|> (Reference: Local Hamiltonians whose ground states are hard to approximate: Ground states of local Hamiltonians can be generally highly entangled: any quantum circuit that generates them (even approximately) must be sufficiently deep to allow coupling (entanglement) between any pair of qubits. Until now this property was not known to be robust - the marginals of such states to a subset of the qubits containing all but a small constant fraction of them may be only locally entangled, and hence approximable by shallow quantum circuits. In this work we construct a family of 16-local Hamiltonians for which any 1-10^-8 fraction of qubits of any ground state must be highly entangled.This provides evidence that quantum entanglement is not very fragile, and perhaps our intuition about its instability is an artifact of considering local Hamiltonians which are not only local but spatially local. Formally, it provides positive evidence for two wide-open conjectures in condensed-matter physics and quantum complexity theory which are the qLDPC conjecture, positing the existence of good quantum LDPC codes, and the NLTS conjecture due to Freedman and Hastings positing the existence of local Hamiltonians in which any low-energy state is highly-entangled.Our Hamiltonian is based on applying the hypergraph product by Tillich-Zemor to the repetition code with checks from an expander graph. A key tool in our proof is a new lower bound on the vertex expansion of the output of low-depth quantum circuits, which may be of independent interest.) <|cite_end|> <|cite_start|> (Reference: {Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States: The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low energy states cannot be generated by constant depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\mathsf{QCMA} \neq \mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS.
Finally, our techniques imply the existence of $\textit{approximate quantum low-weight check (qLWC) codes}$ with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\sqrt{n \log n})$.
The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.) <|cite_end|> <|cite_start|> (Reference: Low-Degree Testing for Quantum States, and a Quantum Entangled Games PCP for QMA: We show that given an explicit description of a multiplayer game, with a classical verifier and a constant number of players, it is QMA-hard, under randomized reductions, to distinguish between the cases when the players have a strategy using entanglement that succeeds with probability 1 in the game, or when no such strategy succeeds with probability larger than 1/2. This proves the "games quantum PCP conjecture" of Fitzsimons and the second author (ITCS'15), albeit under randomized reductions. The core component in our reduction is a construction of a family of two-player games for testing n-qubit maximally entangled states. For any integer n >= 2, we give such a game in which questions from the verifier are O(log n) bits long, and answers are poly(log log n) bits long. We show that for any constant eps >= 0, any strategy that succeeds with probability at least 1-eps in the test must use a state that is within distance δ(eps) = O(eps^c) from a state that is locally equivalent to a maximally entangled state on n qubits, for some universal constant c > 0. The construction is based on the classical plane-vs-point test for multivariate low-degree polynomials of Raz and Safra. We extend the classical test to the quantum regime by executing independent copies of the test in the generalized Pauli X and Z bases over F_q, where q is a sufficiently large prime power, and combine the two through a test for the Pauli twisted commutation relations. Our main complexity-theoretic result is obtained by combining this game with techniques from the classical PCP literature. More specifically, we use constructions of PCPs of proximity introduced by Ben-Sasson et al. (CCC'05), and crucially rely on a linear property of such PCPs. Another consequence of our reduction is a deterministic reduction from the games quantum PCP conjecture to a suitable formulation of the constraint satisfaction quantum PCP conjecture.) <|cite_end|> <|cite_start|> (Reference: Robust quantum entanglement at (nearly) room temperature: We formulate a mixed-state analog of the NLTS conjecture [FH14] by asking whether there exist local Hamiltonians for which the thermal Gibbs state for constant temperature is globally-entangled in the sense that it cannot even be approximated by shallow quantum circuits. We then prove this conjecture holds for nearly optimal parameters: when the "inverse temperature" is almost a constant (temperature decays as 1/loglog(n))) and the Hamiltonian is nearly local (log(n)-local). The construction and proof combine quantum codes that arise from high-dimensional manifolds [Has17, LLZ19], the local-decoding approach to quantum codes [LTZ15, FGL18] and quantum locally-testable codes [AE15].) <|cite_end|>and against <|cite_start|> (Reference: Commutative Version of the Local Hamiltonian Problem and Common Eigenspace Problem: We study the complexity of a problem "Common Eigenspace" -- verifying consistency of eigenvalue equations for composite quantum systems. The input of the problem is a family of pairwise commuting Hermitian operators H1,..., Hr on a Hilbert space (Cd)⊗n and a string of real numbers λ =(λ1,...,λr). 
The problem is to determine whether the common eigenspace specified by equalities Ha|O〉 = λa|O〉, a = 1, ..., r has a positive dimension. We consider two cases: (i) all operators Ha are k-local; (ii) all operators Ha are factorized. It can be easily shown that both problems belong to the class QMA -- quantum analogue of NP, and that some NP-complete problems can be reduced to either (i) or (ii). A non-trivial question is whether the problems (i) or (ii) belong to NP? We show that the answer is positive for some special values of k and d. Also we prove that the problem (ii) can be reduced to its special case, such that all operators Ha are factorized projectors and all λa = 0.) <|cite_end|> <|cite_start|> (Reference: Product-State Approximations to Quantum Ground States: The local Hamiltonian problem consists of estimating the ground-state energy (given by the minimum eigenvalue) of a local quantum Hamiltonian. It can be considered as a quantum generalization of constraint satisfaction problems (CSPs) and has a key role in quantum complexity theory, being the first and most natural QMA-complete problem known. An interesting regime for the local Hamiltonian problem is that of extensive error, where one is interested in estimating the mean ground-state energy to constant accuracy. The problem is NP-hard by the PCP theorem, but whether it is QMA-hard is an important open question in quantum complexity theory. A positive solution would represent a quantum analogue of the PCP theorem. A key feature that distinguishes quantum Hamiltonians from classical CSPs is that the solutions may involve complicated entangled states. In this paper, we demonstrate several large classes of Hamiltonians for which product (i.e. unentangled) states can approximate the ground state energy to within a small extensive error.
First, we show the mere existence of a good product-state approximation for the ground-state energy of 2-local Hamiltonians with one of more of the following properties: (1) super-constant degree, (2) small expansion, or (3) a ground state with sublinear entanglement with respect to some partition into small pieces. The approximation based on degree is a new and surprising difference between quantum Hamiltonians and classical CSPs, since in the classical setting, higher degree is usually associated with harder CSPs. The approximation based on expansion is not new, but the approximation based on low entanglement was previously known only in the regime where the entanglement was close to zero. Since the existence of a low-energy product state can be checked in NP, this implies that any Hamiltonian used for a quantum PCP theorem should have: (1) constant degree, (2) constant expansion, (3) a ``volume law'' for entanglement with respect to any partition into small parts.
Second, we show that in several cases, good product-state approximations not only exist, but can be found in deterministic polynomial time: (1) 2-local Hamiltonians on any planar graph, solving an open problem of Bansal, Bravyi, and Terhal, (2) dense k-local Hamiltonians for any constant k, solving an open problem of Gharibian and Kempe, and (3) 2-local Hamiltonians on graphs with low threshold rank, via a quantum generalization of a recent result of Barak, Raghavendra and Steurer.
Our work involves two new tools which may be of independent interest. First, we prove a new quantum version of the de Finetti theorem which does not require the usual assumption of symmetry. Second, we describe a way to analyze the application of the Lasserre/Parrilo SDP hierarchy to local quantum Hamiltonians.) <|cite_end|> <|cite_start|> (Reference: The commuting local Hamiltonian problem on locally expanding graphs is approximable in $\mathsf{NP}$: ) <|cite_end|> the quantum PCP conjecture, the problem has remained open for nearly two decades.
The difficulty of the quantum PCP conjecture has motivated a flurry of research beginning with Freedman and Hastings' \emph{No low-energy trivial states (NLTS) conjecture} <|cite_start|> (Reference: Quantum systems on non-k-hyperfinite complexes: A generalization of classical statistical mechanics on expander graphs: We construct families of cell complexes that generalize expander graphs. These families are called non-k-hyperfinite, generalizing the idea of a non-hyperfinite (NH) family of graphs. Roughly speaking, such a complex has the property that one cannot remove a small fraction of points and be left with an object that looks k - 1-dimensional at large scales. We then consider certain quantum systems on these complexes. A future goal is to construct a family of Hamiltonians such that every low energy state has topological order as part of an attempt to prove the quantum PCP conjecture. This goal is approached by constructing a toric code Hamiltonian with the property that every low energy state without vertex defects has topological order, a property that would not hold for any local system in any lattice Zd or indeed on any 1-hyperfinite complex. Further, such NH complexes find application in quantum coding theory. The hypergraph product codes[1] of Tillich and Zemor are generalized using NH complexes.) <|cite_end|>. The NLTS conjecture posits that there exists a fixed constant $\eps > 0$ and a family of $n$ qubit local Hamiltonians such that every state of energy $\leq n\eps$ requires a quantum circuit of super-constant depth to generate. The NLTS conjecture is a necessary consequence of the quantum PCP conjecture because $\QMA$-complete problems do not have $\NP$ solutions and a constant-depth quantum circuit generating a low-energy state would serve as a $\NP$ witness. Thus, this conjecture addresses the inapproximability of local Hamiltonians by classical means.
Proving the NLTS conjecture remains a fundamental obstacle to the resolution of the quantum PCP conjecture.
In this work, we show that for local Hamiltonians corresponding to LDPC stabilizer quantum error-correcting codes of linear rate and polynomial distance, every state of energy $\leq \eps n$ requires a quantum circuit of depth $\Omega(\log 1/\eps)$ to generate. We also show similar results for linear-distance LDPC codes. Thus, any improvement to our result would resolve the NLTS conjecture.
\subsection{Our results}
We restrict our attention to quantum error-correcting codes and the low-energy states of the associated code Hamiltonians\footnote{The classical analog of this question, the circuit complexity of approximate sampling from the uniform distribution of a classical error-correcting code, is answered by Lovett and Viola <|cite_start|> (Reference: Bounded-Depth Circuits Cannot Sample Good Codes: ) <|cite_end|>.}. A code Hamiltonian is a local Hamiltonian whose ground-space is precisely the code-space, with the additional property that the energy of a state measures the number of violated code checks.
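For concreteness, here is a toy illustration of this energy-counting property (our example, not a construction from this paper): for the single two-qubit check $C = Z\otimes Z$, the associated term
\begin{align*}
H \;=\; \frac{\id - Z\otimes Z}{2} \qquad\text{satisfies}\qquad H\ket{00} = H\ket{11} = 0, \quad H\ket{01} = \ket{01}, \quad H\ket{10} = \ket{10},
\end{align*}
so states satisfying the check have energy $0$ while states violating it have energy $1$; summing one such term per check, the energy of a state counts (in expectation) the number of violated checks.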
Examples of quantum error-correcting codes realized as the ground-spaces of local Hamiltonians already play a central role in our understanding of the physical phenomenon known as topological order <|cite_start|> (Reference: Fault tolerant quantum computation by anyons: ) <|cite_end|> <|cite_start|> (Reference: Topological order at nonzero temperature: We propose a definition for topological order at nonzero temperature in analogy to the usual zero temperature definition that a state is topologically ordered, or "nontrivial", if it cannot be transformed into a product state (or a state close to a product state) using a local (or approximately local) quantum circuit. We prove that any two-dimensional Hamiltonian which is a sum of commuting local terms is not topologically ordered at T > 0. We show that such trivial states cannot be used to store quantum information using certain stringlike operators. This definition is not too restrictive, however, as the four dimensional toric code does have a nontrivial phase at nonzero temperature.) <|cite_end|>.
Call an error-correcting code an $[[n,k,d]]$ code with locality $\ell$ if it has $n$ physical qubits, $k$ logical qubits, distance $d$ and the corresponding code Hamiltonian has locality $\ell$ (these definitions are made precise in Section \ref{sec:preliminaries}).
Our main result concerns a subclass of codes known as \emph{stabilizer codes}, for which the code Hamiltonian is commuting and each Hamiltonian term is a tensor product of Pauli operators.
\begin{theorem}
\label{thm:main}
Let $\Cc$ be an $[[n,k,d]]$ stabilizer code of constant locality $\ell=O(1)$ and let $H=\sum_i H_i$ be the corresponding code Hamiltonian with a term $H_i = (\id - C_i)/2$ for each code check $C_i$. For any $\eps>0$ and any state $\psi$ on $n$ qubits with energy $\leq\eps n$, the circuit depth of $\psi$ is at least
\begin{align}
\Omega \left( \min \left\{ \log d, \quad \log {\frac{k + d}{n\sqrt{\eps \log\frac{1}{\eps}}}} \right\} \right).
\end{align}
\end{theorem}
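To get a feel for the bound, the following rough calculation (ours, only as a sanity check) instantiates it with $k = \Theta(n)$, $d = \Theta(\sqrt{n})$, and fractional energy $\eps = n^{-\delta}$ for a constant $\delta > 0$:
\begin{align*}
\log \frac{k+d}{n\sqrt{\eps \log\frac{1}{\eps}}} \;=\; \log \frac{\Theta(n)}{n\sqrt{n^{-\delta}\,\delta\log n}} \;=\; \frac{\delta}{2}\log n - O(\log\log n) \;=\; \Omega(\delta \log n),
\end{align*}
while $\log d = \Theta(\log n)$, so the minimum in the theorem is $\Omega(\delta\log n)$. This matches the instantiation for hypergraph product codes discussed next.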
In the case of linear-rate and polynomial-distance codes such as the hypergraph product code of Tillich and Z\'emor <|cite_start|> (Reference: Quantum LDPC Codes With Positive Rate and Minimum Distance Proportional to the Square Root of the Blocklength: The current best asymptotic lower bound on the minimum distance of quantum LDPC codes with a fixed non-zero rate is logarithmic in the blocklength. We propose a construction of quantum LDPC codes with fixed non-zero rate and prove that the minimum distance grows proportionally to the square root of the blocklength.) <|cite_end|>, the theorem proves a circuit lower bound of $\Omega(\delta \log n)$ for any state of energy $O(n^{1-\delta})$. So for fixed $\delta$, say $\delta = 0.01$, it provides a circuit lower bound of $\Omega(\log n)$ for all states of energy $O(n^{0.99})$. Furthermore, it proves a circuit lower bound of $\Omega(\log \log n)$ for any state of energy $O(n/\poly \log n)$ and a super-constant circuit lower bound for any state of energy $o(n)$. {Recent developments <|cite_start|> (Reference: Quantum LDPC Codes with Almost Linear Minimum Distance: We give a construction of quantum LDPC codes of dimension $\Theta(\log N)$ and distance $\Theta(N/\log N)$ as the code length $N\to\infty$. Using a product of chain complexes this construction also provides a family of quantum LDPC codes of distance $\Omega(N^{1-\alpha/2}/\log N)$ and dimension $\Omega(N^\alpha \log N)$, where $0 \le \alpha < 1$. We also introduce and study a new operation called lifted product, which naturally generalizes the product operations for quantum codes and chain complexes. Moreover, as a simple byproduct of our results on quantum codes, we obtain a new result on classical codes. We show that for any fixed $R < 1$ there exists an asymptotically good family of classical quasi-cyclic LDPC codes of rate at least $R$ with, in some sense, optimal circulant size $\Omega(N/\log N)$ as the code length $N\to\infty$.) <|cite_end|> <|cite_start|> (Reference: Fiber Bundle Codes: Breaking the $N^1/2 \operatornamepolylog(N)$ Barrier for Quantum LDPC Codes: We present a quantum LDPC code family that has distance $\Omega(N^{3/5}/\operatorname{polylog}(N))$ and $\tilde\Theta(N^{3/5})$ logical qubits. This is the first quantum LDPC code construction which achieves distance greater than $N^{1/2} \operatorname{polylog}(N)$. The construction is based on generalizing the homological product of codes to a fiber bundle.) <|cite_end|> <|cite_start|> (Reference: Balanced Product Quantum Codes: This work provides the first explicit and non-random family of <inline-formula> <tex-math notation="LaTeX">$[[N,K,D]]$ </tex-math></inline-formula> LDPC quantum codes which encode <inline-formula> <tex-math notation="LaTeX">$K \in \Theta \left({N^{\frac {4}{5}}}\right)$ </tex-math></inline-formula> logical qubits with distance <inline-formula> <tex-math notation="LaTeX">$D \in \Omega \left({N^{\frac {3}{5}}}\right)$ </tex-math></inline-formula>. The family is constructed by amalgamating classical codes and Ramanujan graphs via an operation called <italic>balanced product</italic>. Recently, Hastings–Haah–O’Donnell and Panteleev–Kalachev were the first to show that there exist families of LDPC quantum codes which break the <inline-formula> <tex-math notation="LaTeX">$\mathrm {polylog}(N)\sqrt {N}$ </tex-math></inline-formula> distance barrier. 
However, their constructions are based on probabilistic arguments which only guarantee the code parameters with high probability whereas our bounds hold unconditionally. Further, balanced products allow for non-abelian twisting of the check matrices, leading to a construction of LDPC quantum codes that can be shown to have <inline-formula> <tex-math notation="LaTeX">$K\in \Theta (N)$ </tex-math></inline-formula> and that we conjecture to have linear distance <inline-formula> <tex-math notation="LaTeX">$D\in \Theta (N)$ </tex-math></inline-formula>.) <|cite_end|>have shown quantum LDPC codes with near-linear distance $d=\Omega(n/\log n)$ (but with low rate $k=O(1)$). The theorem also provides a circuit lower bound of $\Omega(\log n)$ for all states of energy $O(n^{0.99})$ in such codes.}
For ``reasonable'' stabilizer codes of polynomial rate and polynomial distance, this theorem provides a non-trivial lower bound on the circuit complexity whenever the fractional energy $\eps$ is $1/\poly(n)$.
Furthermore, for any stabilizer code of nearly-linear-rate (i.e. $n^{1-\delta}$ rate) and distance at least $n^{\Omega(\delta)}$, the theorem still proves a circuit lower bound of $\Omega(\delta \log n)$ for any state of energy $O(n^{1-2\delta})$. Codes with these properties are known to exist on constant-dimensional lattices and are not locally-testable; one example is the punctured 2D toric code\footnote{The punctured 2D toric code is known to saturate the information-distance tradeoff bound of <|cite_start|> (Reference: Tradeoffs for reliable quantum information storage in 2d systems: We ask whether there are fundamental limits on storing quantum information reliably in a bounded volume of space. To investigate this question, we study quantum error correcting codes specified by geometrically local commuting constraints on a 2D lattice of finite-dimensional quantum particles. For these 2D systems, we derive a tradeoff between the number of encoded qubits k, the distance of the code d, and the number of particles n. It is shown that kd{2}=O(n) where the coefficient in O(n) depends only on the locality of the constraints and dimension of the Hilbert spaces describing individual particles. The analogous tradeoff for the classical information storage is k sqrt[d]=O(n).) <|cite_end|>.} with $O(n^{1-\delta})$ punctures <|cite_start|> (Reference: Tradeoffs for reliable quantum information storage in 2d systems: We ask whether there are fundamental limits on storing quantum information reliably in a bounded volume of space. To investigate this question, we study quantum error correcting codes specified by geometrically local commuting constraints on a 2D lattice of finite-dimensional quantum particles. For these 2D systems, we derive a tradeoff between the number of encoded qubits k, the distance of the code d, and the number of particles n. It is shown that kd{2}=O(n) where the coefficient in O(n) depends only on the locality of the constraints and dimension of the Hilbert spaces describing individual particles. The analogous tradeoff for the classical information storage is k sqrt[d]=O(n).) <|cite_end|> <|cite_start|> (Reference: Surface Codes: Towards Practical Large-scale Quantum Computation: This article provides an introduction to surface code quantum computing. We first estimate the size and speed of a surface code quantum computer. We then introduce the concept of the stabilizer, using two qubits, and extend this concept to stabilizers acting on a two-dimensional array of physical qubits, on which we implement the surface code. We next describe how logical qubits are formed in the surface code array and give numerical estimates of their fault-tolerance. We outline how logical qubits are physically moved on the array, how qubit braid transformations are constructed, and how a braid between two logical qubits is equivalent to a controlled-NOT. We then describe the single-qubit Hadamard, S and T operators, completing the set of required gates for a universal quantum computer. We conclude by briefly discussing physical implementations of the surface code. We include a number of appendices in which we provide supplementary information to the main text.) <|cite_end|>. Additionally, toric codes defined on hyperbolic manifolds where the manifold has constant negative curvature also have linear rate and small (yet polynomial) distance. 
Examples include the toric code defined on 4-dimensional arithmetic hyperbolic manifolds <|cite_start|> (Reference: Quantum error correcting codes and 4-dimensional arithmetic hyperbolic manifolds: Using 4-dimensional arithmetic hyperbolic manifolds, we construct some new homological quantum error correcting codes. They are low density parity check codes with linear rate and distance ne. Their rate is evaluated via Euler characteristic arguments and their distance using Z2-systolic geometry. This construction answers a question of Zemor [“On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction,” in Proceedings of Second International Workshop on Coding and Cryptology (IWCC), Lecture Notes in Computer Science Vol. 5557 (2009), pp. 259–273], who asked whether homological codes with such parameters could exist at all.) <|cite_end|>or golden codes <|cite_start|> (Reference: Golden codes: quantum LDPC codes built from regular tessellations of hyperbolic 4-manifolds: We adapt a construction of Guth and Lubotzky [arXiv:1310.5555] to obtain a family of quantum LDPC codes with non-vanishing rate and minimum distance scaling like $n^{0.1}$ where $n$ is the number of physical qubits. Similarly as in [arXiv:1310.5555], our homological code family stems from hyperbolic 4-manifolds equipped with tessellations. The main novelty of this work is that we consider a regular tessellation consisting of hypercubes. We exploit this strong local structure to design and analyze an efficient decoding algorithm.) <|cite_end|>, for which our main result will also prove a super-constant circuit lower bound for all states of energy $o(n)$.
\subsection{Challenges and an overview of proof techniques}
Code-states of an error-correcting code are well known to have a large circuit complexity $\sim\log d$, where $d$ is the distance of the code. This lower bound arises from the local indistinguishability property (see Fact \ref{fact:local-indistinguishability}): for any subset $S$ of fewer than $d$ qubits, the reduced density matrix $\rho_S$ of a code-state $\rho$ is an invariant of the code-space, i.e., it does not depend on which code-state was chosen.
A natural notion of approximation to code-states is the class of low-error states. Such states resemble the code-states on a large number of physical qubits, differing arbitrarily on a small fraction (interpreted as an error). Prior works <|cite_start|> (Reference: Local Hamiltonians whose ground states are hard to approximate: Ground states of local Hamiltonians can be generally highly entangled: any quantum circuit that generates them (even approximately) must be sufficiently deep to allow coupling (entanglement) between any pair of qubits. Until now this property was not known to be robust - the marginals of such states to a subset of the qubits containing all but a small constant fraction of them may be only locally entangled, and hence approximable by shallow quantum circuits. In this work we construct a family of 16-local Hamiltonians for which any 1-10^-8 fraction of qubits of any ground state must be highly entangled.This provides evidence that quantum entanglement is not very fragile, and perhaps our intuition about its instability is an artifact of considering local Hamiltonians which are not only local but spatially local. Formally, it provides positive evidence for two wide-open conjectures in condensed-matter physics and quantum complexity theory which are the qLDPC conjecture, positing the existence of good quantum LDPC codes, and the NLTS conjecture due to Freedman and Hastings positing the existence of local Hamiltonians in which any low-energy state is highly-entangled.Our Hamiltonian is based on applying the hypergraph product by Tillich-Zemor to the repetition code with checks from an expander graph. A key tool in our proof is a new lower bound on the vertex expansion of the output of low-depth quantum circuits, which may be of independent interest.) <|cite_end|> <|cite_start|> (Reference: {Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States: The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low energy states cannot be generated by constant depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\mathsf{QCMA} \neq \mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS.
Finally, our techniques imply the existence of $\textit{approximate quantum low-weight check (qLWC) codes}$ with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\sqrt{n \log n})$.
The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.) <|cite_end|>, exploiting the error-correction property, showed that the low-error states also have a large circuit complexity. This generalized the aforementioned circuit lower bounds on code-states. However, as further demonstrated in <|cite_start|> (Reference: {Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States: The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low energy states cannot be generated by constant depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\mathsf{QCMA} \neq \mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS.
Finally, our techniques imply the existence of $\textit{approximate quantum low-weight check (qLWC) codes}$ with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\sqrt{n \log n})$.
The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.) <|cite_end|>, low-error is a strictly weaker notion than low-energy . Without invoking highly non-trivial properties such as local testability <|cite_start|> (Reference: Local Hamiltonians whose ground states are hard to approximate: Ground states of local Hamiltonians can be generally highly entangled: any quantum circuit that generates them (even approximately) must be sufficiently deep to allow coupling (entanglement) between any pair of qubits. Until now this property was not known to be robust - the marginals of such states to a subset of the qubits containing all but a small constant fraction of them may be only locally entangled, and hence approximable by shallow quantum circuits. In this work we construct a family of 16-local Hamiltonians for which any 1-10^-8 fraction of qubits of any ground state must be highly entangled.This provides evidence that quantum entanglement is not very fragile, and perhaps our intuition about its instability is an artifact of considering local Hamiltonians which are not only local but spatially local. Formally, it provides positive evidence for two wide-open conjectures in condensed-matter physics and quantum complexity theory which are the qLDPC conjecture, positing the existence of good quantum LDPC codes, and the NLTS conjecture due to Freedman and Hastings positing the existence of local Hamiltonians in which any low-energy state is highly-entangled.Our Hamiltonian is based on applying the hypergraph product by Tillich-Zemor to the repetition code with checks from an expander graph. A key tool in our proof is a new lower bound on the vertex expansion of the output of low-depth quantum circuits, which may be of independent interest.) <|cite_end|>, it seems unclear if the low-energy states can be viewed as low-error. This leads to the central challenge towards the NLTS conjecture: capturing the circuit complexity of the low-energy states. The prior arguments, all of which rely on local indistinguishability (captured by the code distance), do not seem to suffice.
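A standard illustration of the gap between the two notions (not specific to this paper): for the repetition-code Hamiltonian $\sum_{i=1}^{n-1} (\id - Z_i Z_{i+1})/2$, the domain-wall state $\ket{0^{n/2}1^{n/2}}$ violates a single check and thus has energy $1$, yet it differs from both codewords $\ket{0^n}$ and $\ket{1^n}$ on $n/2$ qubits; it is therefore a low-energy state that is not a low-error state.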
We observe, for the first time, that another parameter plays a key role in circuit lower bounds: the rate of the code. Inspired by <|cite_start|> (Reference: Tradeoffs for reliable quantum information storage in 2d systems: We ask whether there are fundamental limits on storing quantum information reliably in a bounded volume of space. To investigate this question, we study quantum error correcting codes specified by geometrically local commuting constraints on a 2D lattice of finite-dimensional quantum particles. For these 2D systems, we derive a tradeoff between the number of encoded qubits k, the distance of the code d, and the number of particles n. It is shown that kd^2=O(n) where the coefficient in O(n) depends only on the locality of the constraints and dimension of the Hilbert spaces describing individual particles. The analogous tradeoff for the classical information storage is k sqrt(d)=O(n).) <|cite_end|>, we use novel entropic arguments to prove that states of low circuit complexity are significantly far in $\ell_1$-distance from high-rate code-spaces (established in Section \ref{sec:warmup-entropy}). Formally, we show that all states of circuit complexity $\leq \log d$ are at an $\ell_1$-distance of at least $\Omega(\frac{k^2}{n^2})$ from the code-space\footnote{
This is proved using an information-theoretic argument. Consider a state $\psi$ with small trace distance to the code. Then, the reduced density matrices $\{\psi_S\}$ approximate the reduced density matrices of the closest state of $\Cc$. By local indistinguishability, the $\{\psi_S\}$ in turn approximate the reduced density matrices of all code-states. In particular, they approximate the reduced density matrices of the encoded maximally-mixed state $\Theta$ of the code. This state has entropy $S(\Theta)$ equal to the rate of the code, $k$. We now show that if $\psi$ has low circuit complexity, then the entropy $S(\Theta)$, and hence the rate $k$, must be small.
Assume that $\psi$ is the output of a low-depth circuit $W$. Then, for any qubit $i$,
\begin{align}
\tr_{-\{i\}}(W^\dagger \psi W) \approx \tr_{-\{i\}}(W^\dagger \Theta W). \label{eq:approxidea}
\end{align}
This is because (a) $\tr_{-L_i}(\psi) \approx \tr_{-L_i}(\Theta)$, where $L_i$ is the support of the lightcone of qubit $i$ with respect to $W$, and (b) the value of the $i$th qubit of a $W$-rotated state only depends on the lightcone of the $i$th qubit. Now, the left-hand side of \eqref{eq:approxidea} equals the pure state $\ketbra{0}{0}$, and so the entropy of $\tr_{-\{i\}}(W^\dagger \Theta W)$, the $i$th qubit of $W^\dagger \Theta W$, is small. Summing over all qubits (by subadditivity of entropy) gives an overall bound on the entropy of $W^\dagger\Theta W$, which by unitary invariance equals the entropy of $\Theta$ and therefore upper bounds the rate of the code.
}.
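For orientation, the entropy argument in the preceding footnote can be compressed into one schematic chain of (in)equalities; the error terms $\delta_i$ below are our own shorthand for the continuity-of-entropy losses incurred in \eqref{eq:approxidea}, and the precise dependence of these losses on the lightcones of $W$ and on the distance of $\psi$ to the code is what Section \ref{sec:warmup-entropy} tracks formally:
\begin{align*}
k \;=\; S(\Theta) \;=\; S(W^\dagger \Theta W) \;\le\; \sum_{i=1}^{n} S\big(\tr_{-\{i\}}(W^\dagger \Theta W)\big) \;\le\; \sum_{i=1}^{n} \Big[ S\big(\tr_{-\{i\}}(W^\dagger \psi W)\big) + \delta_i \Big] \;=\; \sum_{i=1}^{n} \delta_i.
\end{align*}
The first equality restates that the encoded maximally-mixed state has entropy equal to the rate, the second is unitary invariance of entropy, the first inequality is subadditivity, the second uses continuity of entropy, and the final equality holds because every single-qubit reduced state of $W^\dagger \psi W$ is the pure state $\ketbra{0}{0}$. Thus, if all the $\delta_i$ are small, the rate $k$ must be small.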
This observation alone does not suffice to address the aforementioned central challenge: the space of low-energy states is much larger than the code-space or even its small neighborhood. A general strategy in earlier works <|cite_start|> (Reference: Local Hamiltonians whose ground states are hard to approximate: Ground states of local Hamiltonians can be generally highly entangled: any quantum circuit that generates them (even approximately) must be sufficiently deep to allow coupling (entanglement) between any pair of qubits. Until now this property was not known to be robust - the marginals of such states to a subset of the qubits containing all but a small constant fraction of them may be only locally entangled, and hence approximable by shallow quantum circuits. In this work we construct a family of 16-local Hamiltonians for which any 1-10^-8 fraction of qubits of any ground state must be highly entangled.This provides evidence that quantum entanglement is not very fragile, and perhaps our intuition about its instability is an artifact of considering local Hamiltonians which are not only local but spatially local. Formally, it provides positive evidence for two wide-open conjectures in condensed-matter physics and quantum complexity theory which are the qLDPC conjecture, positing the existence of good quantum LDPC codes, and the NLTS conjecture due to Freedman and Hastings positing the existence of local Hamiltonians in which any low-energy state is highly-entangled.Our Hamiltonian is based on applying the hypergraph product by Tillich-Zemor to the repetition code with checks from an expander graph. A key tool in our proof is a new lower bound on the vertex expansion of the output of low-depth quantum circuits, which may be of independent interest.) <|cite_end|> <|cite_start|> (Reference: Robust quantum entanglement at (nearly) room temperature: We formulate a mixed-state analog of the NLTS conjecture [FH14] by asking whether there exist local Hamiltonians for which the thermal Gibbs state for constant temperature is globally-entangled in the sense that it cannot even be approximated by shallow quantum circuits. We then prove this conjecture holds for nearly optimal parameters: when the "inverse temperature" is almost a constant (temperature decays as 1/loglog(n))) and the Hamiltonian is nearly local (log(n)-local). The construction and proof combine quantum codes that arise from high-dimensional manifolds [Has17, LLZ19], the local-decoding approach to quantum codes [LTZ15, FGL18] and quantum locally-testable codes [AE15].) <|cite_end|> was to build a low-depth decoding circuit to bring each low-energy state closer to the code-space. But this required assuming that the code was locally testable; such codes are not known to exist in the desired parameter regime. We instead appeal to the observation that every eigenspace of a stabilizer code Hamiltonian possesses the local indistinguishability property (Fact \ref{fact:local-indistinguishability}). Instead of attempting to construct a decoding circuit, we measure the syndrome using a constant-depth circuit (which uses the LDPC nature of the code Hamiltonian). This allows us to decohere the low-energy state into a mixture of orthogonal states that live within each of the eigenspaces. A key realization is that measurement of the syndrome for low-energy states is a gentle measurement in that it does not perturb the state locally. This is used to show that a state of low energy satisfies an approximate version of local indistinguishability.
This, coupled with the argument for codes of high rate, completes the proof.
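The only structural ingredients used in the syndrome-measurement step are the two sparsity properties of an LDPC check family: each check touches $O(1)$ qubits and each qubit enters $O(1)$ checks. The toy Python sketch below (a purely classical stand-in with made-up sizes, not the quantum gentle-measurement argument itself) illustrates why these properties yield a readout schedule whose number of parallel rounds is independent of $n$, and why a low-weight deviation can only flip a correspondingly small number of syndrome bits.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# A sparse "LDPC-like" check matrix: each of the m checks touches w qubits.
n, m, w = 60, 30, 4                      # toy sizes, chosen for illustration
H = np.zeros((m, n), dtype=int)
for row in H:
    row[rng.choice(n, size=w, replace=False)] = 1

row_weight = H.sum(axis=1).max()         # qubits per check (= w)
col_weight = H.sum(axis=0).max()         # checks per qubit

# Checks sharing a qubit cannot be read out simultaneously, so a greedy
# coloring of the conflict graph gives the number of parallel readout rounds;
# it is at most row_weight * col_weight, independent of n.
conflicts = (H @ H.T > 0)
np.fill_diagonal(conflicts, False)
colour = np.full(m, -1)
for c in range(m):
    used = {colour[d] for d in range(m) if conflicts[c, d] and colour[d] >= 0}
    colour[c] = next(k for k in range(m) if k not in used)
rounds = colour.max() + 1
assert rounds <= row_weight * col_weight

# A low-weight error violates few checks: each flipped qubit can affect
# at most col_weight syndrome bits.
error = np.zeros(n, dtype=int)
error[rng.choice(n, size=3, replace=False)] = 1
syndrome = H @ error % 2
assert syndrome.sum() <= error.sum() * col_weight
print(rounds, syndrome.sum())  # a handful of rounds, independent of n
\end{verbatim}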
\subsection{Separation of the NLTS conjecture from the QLDPC/QLTC conjectures}
A quantum low-density parity-check (LDPC) code is an error-correcting code with a local Hamiltonian defining the code-space, such that each qubit participates in at most a constant number of Hamiltonian terms and each Hamiltonian term acts on at most a constant number of qubits (i.e. the bipartite interaction matrix has low-density). The QLDPC conjecture posits the existence of LDPC codes that also have linear-rate and linear-distance. It has been previously suspected that a QLDPC property would be necessary for NLTS Hamiltonians <|cite_start|> (Reference: Quantum systems on non-k-hyperfinite complexes: A generalization of classical statistical mechanics on expander graphs: We construct families of cell complexes that generalize expander graphs. These families are called non-k-hyperfinite, generalizing the idea of a non-hyperfinite (NH) family of graphs. Roughly speaking, such a complex has the property that one cannot remove a small fraction of points and be left with an object that looks k - 1-dimensional at large scales. We then consider certain quantum systems on these complexes. A future goal is to construct a family of Hamiltonians such that every low energy state has topological order as part of an attempt to prove the quantum PCP conjecture. This goal is approached by constructing a toric code Hamiltonian with the property that every low energy state without vertex defects has topological order, a property that would not hold for any local system in any lattice Zd or indeed on any 1-hyperfinite complex. Further, such NH complexes find application in quantum coding theory. The hypergraph product codes[1] of Tillich and Zemor are generalized using NH complexes.) <|cite_end|> <|cite_start|> (Reference: Quantum Locally Testable Codes: We initiate the study of quantum Locally Testable Codes (qLTCs). We provide a definition together with a simplification, denoted sLTCs, for the special case of stabilizer codes, together with some basic results using those definitions. The most crucial parameter of such codes is their soundness, $R(\delta)$, namely, the probability that a randomly chosen constraint is violated as a function of the distance of a word from the code ($\delta$, the relative distance from the code, is called the proximity). We then proceed to study limitations on qLTCs. In our first main result we prove a surprising, inherently quantum, property of sLTCs: for small values of proximity, the better the small-set expansion of the interaction graph of the constraints, the less sound the qLTC becomes. This phenomenon, which can be attributed to monogamy of entanglement, stands in sharp contrast to the classical setting. The complementary, more intuitive, result also holds: an upper bound on the soundness when the code is defined on poor small-set expanders (a bound which turns out to be far more difficult to show in the quantum case). Together we arrive at a quantum upper-bound on the soundness of stabilizer qLTCs set on any graph, which does not hold in the classical case. Many open questions are raised regarding what possible parameters are achievable for qLTCs. In the appendix we also define a quantum analogue of PCPs of proximity (PCPPs) and point out that the result of Ben-Sasson et. al. by which PCPPs imply LTCs with related parameters, carries over to the sLTCs. This creates a first link between qLTCs and quantum PCPs.) <|cite_end|> <|cite_start|> (Reference: Quantum Codes from High-Dimensional Manifolds: We construct toric codes on various high-dimensional manifolds. 
Assuming a conjecture in geometry we find families of quantum CSS stabilizer codes on $N$ qubits with logarithmic weight stabilizers and distance $N^{1-\epsilon}$ for any $\epsilon>0$. The conjecture is that there is a constant $C>0$ such that for any $n$-dimensional torus ${\mathbb T}^n={\mathbb R}^n/\Lambda$, where $\Lambda$ is a lattice, the least volume unoriented $n/2$-dimensional surface (using the Euclidean metric) representing nontrivial homology has volume at least $C^n$ times the volume of the least volume $n/2$-dimensional hyperplane representing nontrivial homology; in fact, it would suffice to have this result for $\Lambda$ an integral lattice with the surface restricted to faces of a cubulation by unit hypercubes. The main technical result is an estimate of Rankin invariants\cite{rankin} for certain random lattices, showing that in a certain sense they are optimal. Additionally, we construct codes with square-root distance, logarithmic weight stabilizers, and inverse polylogarithmic soundness factor (considered as quantum locally testable codes\cite{qltc}). We also provide an short, alternative proof that the shortest vector in the exterior power of a lattice may be non-split\cite{coulangeon}.) <|cite_end|> <|cite_start|> (Reference: Local Hamiltonians whose ground states are hard to approximate: Ground states of local Hamiltonians can be generally highly entangled: any quantum circuit that generates them (even approximately) must be sufficiently deep to allow coupling (entanglement) between any pair of qubits. Until now this property was not known to be robust - the marginals of such states to a subset of the qubits containing all but a small constant fraction of them may be only locally entangled, and hence approximable by shallow quantum circuits. In this work we construct a family of 16-local Hamiltonians for which any 1-10^-8 fraction of qubits of any ground state must be highly entangled.This provides evidence that quantum entanglement is not very fragile, and perhaps our intuition about its instability is an artifact of considering local Hamiltonians which are not only local but spatially local. Formally, it provides positive evidence for two wide-open conjectures in condensed-matter physics and quantum complexity theory which are the qLDPC conjecture, positing the existence of good quantum LDPC codes, and the NLTS conjecture due to Freedman and Hastings positing the existence of local Hamiltonians in which any low-energy state is highly-entangled.Our Hamiltonian is based on applying the hypergraph product by Tillich-Zemor to the repetition code with checks from an expander graph. A key tool in our proof is a new lower bound on the vertex expansion of the output of low-depth quantum circuits, which may be of independent interest.) <|cite_end|> <|cite_start|> (Reference: {Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States: The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low energy states cannot be generated by constant depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). 
In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\mathsf{QCMA} \neq \mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS.
Finally, our techniques imply the existence of $\textit{approximate quantum low-weight check (qLWC) codes}$ with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\sqrt{n \log n})$.
The principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian.) <|cite_end|>. Our result breaks this intuition by showing that lower bound results are achievable even when the distance is a small polynomial; interestingly, it is the rate that needs to be almost linear for our result, a counter-intuitive property. Furthermore, our results show that entanglement persists at energy well past the distance threshold;
a regime where one intuitively expects the stored information to be lost.
Furthermore, it is believed that the QLDPC codes also need to be locally-testable <|cite_start|> (Reference: Quantum Locally Testable Codes: We initiate the study of quantum Locally Testable Codes (qLTCs). We provide a definition together with a simplification, denoted sLTCs, for the special case of stabilizer codes, together with some basic results using those definitions. The most crucial parameter of such codes is their soundness, $R(\delta)$, namely, the probability that a randomly chosen constraint is violated as a function of the distance of a word from the code ($\delta$, the relative distance from the code, is called the proximity). We then proceed to study limitations on qLTCs. In our first main result we prove a surprising, inherently quantum, property of sLTCs: for small values of proximity, the better the small-set expansion of the interaction graph of the constraints, the less sound the qLTC becomes. This phenomenon, which can be attributed to monogamy of entanglement, stands in sharp contrast to the classical setting. The complementary, more intuitive, result also holds: an upper bound on the soundness when the code is defined on poor small-set expanders (a bound which turns out to be far more difficult to show in the quantum case). Together we arrive at a quantum upper-bound on the soundness of stabilizer qLTCs set on any graph, which does not hold in the classical case. Many open questions are raised regarding what possible parameters are achievable for qLTCs. In the appendix we also define a quantum analogue of PCPs of proximity (PCPPs) and point out that the result of Ben-Sasson et. al. by which PCPPs imply LTCs with related parameters, carries over to the sLTCs. This creates a first link between qLTCs and quantum PCPs.) <|cite_end|>for NLTS. This fact is formalized by Eldar and Harrow <|cite_start|> (Reference: Local Hamiltonians whose ground states are hard to approximate: Ground states of local Hamiltonians can be generally highly entangled: any quantum circuit that generates them (even approximately) must be sufficiently deep to allow coupling (entanglement) between any pair of qubits. Until now this property was not known to be robust - the marginals of such states to a subset of the qubits containing all but a small constant fraction of them may be only locally entangled, and hence approximable by shallow quantum circuits. In this work we construct a family of 16-local Hamiltonians for which any 1-10^-8 fraction of qubits of any ground state must be highly entangled.This provides evidence that quantum entanglement is not very fragile, and perhaps our intuition about its instability is an artifact of considering local Hamiltonians which are not only local but spatially local. Formally, it provides positive evidence for two wide-open conjectures in condensed-matter physics and quantum complexity theory which are the qLDPC conjecture, positing the existence of good quantum LDPC codes, and the NLTS conjecture due to Freedman and Hastings positing the existence of local Hamiltonians in which any low-energy state is highly-entangled.Our Hamiltonian is based on applying the hypergraph product by Tillich-Zemor to the repetition code with checks from an expander graph. A key tool in our proof is a new lower bound on the vertex expansion of the output of low-depth quantum circuits, which may be of independent interest.) <|cite_end|>who give a construction of an NLTS Hamiltonian from any locally-testable CSS QLDPC code with constant soundness. 
Quantum locally testable codes (QLTCs) of constant soundness are not known to exist; the best constructions achieve a soundness factor of $O(1/\poly \log n)$ with a distance of $\Omega(\sqrt{n})$ <|cite_start|> (Reference: Quantum Codes from High-Dimensional Manifolds: We construct toric codes on various high-dimensional manifolds. Assuming a conjecture in geometry we find families of quantum CSS stabilizer codes on $N$ qubits with logarithmic weight stabilizers and distance $N^{1-\epsilon}$ for any $\epsilon>0$. The conjecture is that there is a constant $C>0$ such that for any $n$-dimensional torus ${\mathbb T}^n={\mathbb R}^n/\Lambda$, where $\Lambda$ is a lattice, the least volume unoriented $n/2$-dimensional surface (using the Euclidean metric) representing nontrivial homology has volume at least $C^n$ times the volume of the least volume $n/2$-dimensional hyperplane representing nontrivial homology; in fact, it would suffice to have this result for $\Lambda$ an integral lattice with the surface restricted to faces of a cubulation by unit hypercubes. The main technical result is an estimate of Rankin invariants\cite{rankin} for certain random lattices, showing that in a certain sense they are optimal. Additionally, we construct codes with square-root distance, logarithmic weight stabilizers, and inverse polylogarithmic soundness factor (considered as quantum locally testable codes\cite{qltc}). We also provide an short, alternative proof that the shortest vector in the exterior power of a lattice may be non-split\cite{coulangeon}.) <|cite_end|> <|cite_start|> (Reference: Towards local testability for quantum coding: We introduce the hemicubic codes, a family of quantum codes obtained by associating qubits with the $p$-faces of the $n$-cube (for $n>p$) and stabilizer constraints with faces of dimension $(p\pm1)$. The quantum code obtained by identifying antipodal faces of the resulting complex encodes one logical qubit into $N = 2^{n-p-1} \tbinom{n}{p}$ physical qubits and displays local testability with a soundness of $\Omega(1/\log(N))$ beating the current state-of-the-art of $1/\log^{2}(N)$ due to Hastings. We exploit this local testability to devise an efficient decoding algorithm that corrects arbitrary errors of size less than the minimum distance, up to polylog factors. We then extend this code family by considering the quotient of the $n$-cube by arbitrary linear classical codes of length $n$. We establish the parameters of these generalized hemicubic codes. Interestingly, if the soundness of the hemicubic code could be shown to be constant, similarly to the ordinary $n$-cube, then the generalized hemicubic codes could yield quantum locally testable codes of length not exceeding an exponential or even polynomial function of the code dimension.) <|cite_end|>. Our construction does not require local-testability; in fact, the hypergraph product code <|cite_start|> (Reference: Quantum LDPC Codes With Positive Rate and Minimum Distance Proportional to the Square Root of the Blocklength: The current best asymptotic lower bound on the minimum distance of quantum LDPC codes with a fixed non-zero rate is logarithmic in the blocklength. We propose a construction of quantum LDPC codes with fixed non-zero rate and prove that the minimum distance grows proportionally to the square root of the blocklength.) <|cite_end|>with linear rate and polynomial distance is not locally-testable as there are errors of size $\Omega(\sqrt{n})$ that violate only a single check \cite[page 4]{1911.03069}.
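Since the hypergraph product construction is invoked above, here is a minimal numerical sketch of it in Python (the standard Tillich-Zemor form; the toy input code, variable names, and the dimension check are ours). It verifies only the CSS commutation condition and counts logical qubits; it makes no attempt to reproduce the rate or distance claims quoted in the text.
\begin{verbatim}
import numpy as np

def gf2_rank(M):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    M = M.copy() % 2
    rank = 0
    rows, cols = M.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def hypergraph_product(H1, H2):
    """Tillich-Zemor hypergraph product of two classical parity-check matrices."""
    m1, n1 = H1.shape
    m2, n2 = H2.shape
    Hx = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(m1, dtype=int), H2.T)]) % 2
    Hz = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(m2, dtype=int))]) % 2
    return Hx, Hz

# Toy classical code: parity checks of the length-3 repetition code.
H = np.array([[1, 1, 0],
              [0, 1, 1]])
Hx, Hz = hypergraph_product(H, H)
n = Hx.shape[1]                       # physical qubits: n1*n2 + m1*m2 = 13
assert not (Hx @ Hz.T % 2).any()      # CSS condition: X and Z checks commute
k = n - gf2_rank(Hx) - gf2_rank(Hz)   # logical qubits
print(n, k)                           # 13 1
\end{verbatim}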
\subsection{Spatially local Hamiltonians}
A key property of an NLTS Hamiltonian is that it cannot live on a lattice of dimension $D$ for a fixed constant $D$ <|cite_start|> (Reference: Guest column: The quantum pcp conjecture: The classical PCP theorem is arguably the most important achievement of classical complexity theory in the past quarter century. In recent years, researchers in quantum computational complexity have tried to identify approaches and develop tools that address the question: does a quantum version of the PCP theorem hold? The story of this study starts with classical complexity and takes unexpected turns providing fascinating vistas on the foundations of quantum mechanics and multipartite entanglement, topology and the so-called phenomenon of topological order, quantum error correction, information theory, and much more; it raises questions that touch upon some of the most fundamental issues at the heart of our understanding of quantum mechanics. At this point, the jury is still out as to whether or not such a theorem holds. This survey aims to provide a snapshot of the status in this ongoing story, tailored to a general theory-of-CS audience.) <|cite_end|>. This is because of a ``cutting'' argument: Let $H$ be a local Hamiltonian in $D$ dimensions and $\Psi$ a ground-state of $H$.
For a fixed constant $\eps$, partition the lattice into $D$-dimensional rectangular chunks so that the side length of each rectangular chunk is $O((D\eps)^{-1/D})$. Let $\rho_i$ be the reduced state of $\Psi$ on chunk $i$, and $\rho = \bigotimes_i \rho_i$ be a state over all the qubits. It is not hard to check that $\rho$ violates at most an $\eps$-fraction of the terms of $H$ (only the boundary terms of the rectangular division) and yet has circuit complexity at most $\exp(((D \eps)^{-1/D} )^{D}) = O(\exp(1/(D\eps))) = O(1)$; so $H$ is not NLTS.
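To spell out the counting behind this step, with our own (slightly coarser) bookkeeping: a chunk of side length $\ell$ contains $\ell^{D}$ qubits, only the $O(D\,\ell^{D-1})$ terms straddling its boundary can be violated by $\rho$, and the total number of terms of a nearest-neighbor lattice Hamiltonian is $\Theta(n)$ up to factors of $D$. Hence
\begin{align*}
\frac{\#\{\text{terms of $H$ violated by } \rho\}}{\#\{\text{terms of } H\}} \;=\; O\!\left(\frac{D}{\ell}\right),
\qquad
\text{circuit complexity of } \rho \;=\; \exp\!\big(O(\ell^{D})\big),
\end{align*}
and both quantities are constants once $\ell$ is chosen as a function of $D$ and $\eps$ alone.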
This circuit complexity upper bound can be further improved for the specific case of stabilizer Hamiltonians on a lattice, due to the result of Aaronson and Gottesman <|cite_start|> (Reference: Improved Simulation of Stabilizer Circuits: The Gottesman-Knill theorem says that a stabilizer circuit -- that is, a quantum circuit consisting solely of CNOT, Hadamard, and phase gates -- can be simulated efficiently on a classical computer. This paper improves that theorem in several directions. First, by removing the need for Gaussian elimination, we make the simulation algorithm much faster at the cost of a factor-2 increase in the number of bits needed to represent a state. We have implemented the improved algorithm in a freely-available program called CHP (CNOT-Hadamard-Phase), which can handle thousands of qubits easily. Second, we show that the problem of simulating stabilizer circuits is complete for the classical complexity class ParityL, which means that stabilizer circuits are probably not even universal for classical computation. Third, we give efficient algorithms for computing the inner product between two stabilizer states, putting any n-qubit stabilizer circuit into a "canonical form" that requires at most O(n^2/log n) gates, and other useful tasks. Fourth, we extend our simulation algorithm to circuits acting on mixed states, circuits containing a limited number of non-stabilizer gates, and circuits acting on general tensor-product initial states but containing only a limited number of measurements.) <|cite_end|>. Since the circuit complexity of each chunk is at most logarithmic in its size, $O(1/(D\eps))$, the aforementioned quantum state $\rho$ can actually be prepared by a circuit of depth $O(\min (\log n, \log(1/\eps)))$. Note that this holds for any $0<\eps < 1$, not just a constant. Therefore, our lower bound in the case of codes of nearly linear rate and polynomial distance (such as the punctured toric code) matches the upper bound, up to constant factors, closing the question on the circuit complexity of the approximate ground states of these codes.
We also highlight that the only known constructions of LDPC stabilizer codes of linear rate and polynomial distance are built from classical expander graphs and therefore cannot live on a lattice of constant dimension $D$. Hence, our result in Theorem \ref{thm:main} (applied to linear-rate codes) conveniently evades this counterexample.
\subsection{The physics perspective}
The crucial role of entanglement in the theory of quantum many-body systems is widely known with some seminal examples including topological phases of matter <|cite_start|> (Reference: Topological entanglement entropy: We formulate a universal characterization of the many-particle quantum entanglement in the ground state of a topologically ordered two-dimensional medium with a mass gap. We consider a disk in the plane, with a smooth boundary of length L, large compared to the correlation length. In the ground state, by tracing out all degrees of freedom in the exterior of the disk, we obtain a marginal density operator rho for the degrees of freedom in the interior. The von Neumann entropy of rho, a measure of the entanglement of the interior and exterior variables, has the form S(rho) = alphaL - gamma + ..., where the ellipsis represents terms that vanish in the limit L --> infinity. We show that - gamma is a universal constant characterizing a global feature of the entanglement in the ground state. Using topological quantum field theory methods, we derive a formula for gamma in terms of properties of the superselection sectors of the medium.) <|cite_end|>and quantum computation with physically realistic systems <|cite_start|> (Reference: A one-way quantum computer: We present a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. The measurements are used to imprint a quantum logic circuit on the state, thereby destroying its entanglement at the same time. Cluster states are thus one-way quantum computers and the measurements form the program.) <|cite_end|> <|cite_start|> (Reference: Measurement-based quantum computation on cluster states: We give a detailed account of the one-way quantum computer, a scheme of quantum computation that consists entirely of one-qubit measurements on a particular class of entangled states, the cluster states. We prove its universality, describe why its underlying computational model is different from the network model of quantum computation, and relate quantum algorithms to mathematical graphs. Further we investigate the scaling of required resources and give a number of examples for circuits of practical interest such as the circuit for quantum Fourier transformation and for the quantum adder. Finally, we describe computation with clusters of finite size.) <|cite_end|>. But entanglement also brings new challenges as the classical simulation of realistic many-body systems faces serious computational overheads.
Estimating the ground-energy of such systems is one of the major problems in condensed matter physics <|cite_start|> (Reference: {Density matrix formulation for quantum renormalization
groups: A generalization of the numerical renormalization-group procedure used first by Wilson for the Kondo problem is presented. It is shown that this formulation is optimal in a certain sense. As a demonstration of the effectiveness of this approach, results from numerical real-space renormalization-group calculations for Heisenberg chains are presented.) <|cite_end|>, quantum chemistry <|cite_start|> (Reference: Quantum Chemistry in the Age of Quantum Computing: Practical challenges in simulating quantum systems on classical computers have been widely recognized in the quantum physics and quantum chemistry communities over the past century. Although many approximation methods have been introduced, the complexity of quantum mechanics remains hard to appease. The advent of quantum computation brings new pathways to navigate this challenging and complex landscape. By manipulating quantum states of matter and taking advantage of their unique features such as superposition and entanglement, quantum computers promise to efficiently deliver accurate results for many important problems in quantum chemistry, such as the electronic structure of molecules. In the past two decades, significant advances have been made in developing algorithms and physical hardware for quantum computing, heralding a revolution in simulation of quantum systems. This Review provides an overview of the algorithms and results that are relevant for quantum chemistry. The intended audience is both quantum chemists who seek to learn more about quantum computing and quantum computing researchers who would like to explore applications in quantum chemistry.) <|cite_end|>, and quantum annealing <|cite_start|> (Reference: Quantum stochastic optimization: ) <|cite_end|> <|cite_start|> (Reference: Quantum annealing in the transverse {{Ising}} model: In this work we perform numerical simulations of a quantum annealing procedure to find the ground state of a target Hamiltonian. By using this technique one starts from an Ising Hamiltonian with a transverse field that forces the spins to point in the x direction. The field is removed slowly with time, the purpose of this is to carry an adiabatic transition from the transverse-field Hamiltonian to the target Hamiltonian. In order to prove the functionality of quantum annealing we will introduce two terms to quantify the success of the protocol.) <|cite_end|>. One of the key methods to address this problem is to construct \emph{ansatz quantum states} that achieve as low-energy as possible and are also suitable for numerical simulations. A leading ansatz, used in Variational Quantum Eigensolvers <|cite_start|> (Reference: A variational eigenvalue solver on a photonic quantum processor: ) <|cite_end|> <|cite_start|> (Reference: Variational quantum state diagonalization: ) <|cite_end|> <|cite_start|> (Reference: Quantum Chemistry in the Age of Quantum Computing: Practical challenges in simulating quantum systems on classical computers have been widely recognized in the quantum physics and quantum chemistry communities over the past century. Although many approximation methods have been introduced, the complexity of quantum mechanics remains hard to appease. The advent of quantum computation brings new pathways to navigate this challenging and complex landscape. 
By manipulating quantum states of matter and taking advantage of their unique features such as superposition and entanglement, quantum computers promise to efficiently deliver accurate results for many important problems in quantum chemistry, such as the electronic structure of molecules. In the past two decades, significant advances have been made in developing algorithms and physical hardware for quantum computing, heralding a revolution in simulation of quantum systems. This Review provides an overview of the algorithms and results that are relevant for quantum chemistry. The intended audience is both quantum chemists who seek to learn more about quantum computing and quantum computing researchers who would like to explore applications in quantum chemistry.) <|cite_end|>or Quantum Adiabatic Optimization Algorithm <|cite_start|> (Reference: {A quantum approximate optimization algorithm: We introduce a quantum algorithm that produces approximate solutions for combinatorial optimization problems. The algorithm depends on an integer p ≥ 1 and the quality of the approximation improves as p is increased. The quantum circuit that implements the algorithm consists of unitary gates whose locality is at most the locality of the objective function whose optimum is sought. The depth of the circuit grows linearly with p times (at worst) the number of constraints. If p is fixed, that is, independent of the input size, the algorithm makes use of efficient classical pre-processing. If p grows with the input size a different strategy is proposed. We study the algorithm as applied to MaxCut on regular graphs and analyze its performance on 2-regular and 3-regular graphs for fixed p . For p = 1, on 3-regular graphs the quantum algorithm always finds a cut that is at least 0.6924 times the size of the optimal cut.) <|cite_end|>, is precisely the class of quantum states that can be generated by low-depth quantum circuits.
Theorem \ref{thm:main} shows that there are Hamiltonians for which no constant-depth ansatz can estimate the ground-energy beyond a fairly large threshold. As discussed earlier, we provide examples even in the physically realistic two-dimensional setting. For example, the 2D punctured toric code Hamiltonians on $n$ qubits with distance $d$ (which is a free parameter) require a circuit of depth $\Omega(\log d)$ for an approximation to the ground-energy better than $O(n/d^3)$.
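For concreteness (our own arithmetic, plugging a specific choice of the free parameter into the bounds just stated), taking $d = n^{1/4}$ gives
\begin{align*}
O\!\left(\frac{n}{d^{3}}\right) \;=\; O\!\left(n^{1/4}\right),
\qquad
\Omega(\log d) \;=\; \Omega(\log n),
\end{align*}
that is, any state whose energy is within $O(n^{1/4})$ of the ground-energy already requires circuit depth logarithmic in the total number of qubits.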
\subsection{Prior Results}
To the best of our knowledge, prior to this result, a circuit lower bound on the complexity of \emph{all} low-energy states was only known for states of energy $O(n^{-2})$. This result follows from the $\QMA$-completeness of the local Hamiltonian problem with a promise gap of $O(n^{-2})$ (assuming $\NP \neq \QMA$); the original proof of Kitaev had a promise gap of $O(n^{-3})$ <|cite_start|> (Reference: {Classical and Quantum Computation: Introduction Classical computation Quantum computation Solutions Elementary number theory Bibliography Index.) <|cite_end|> <|cite_start|> (Reference: The Complexity of the Local Hamiltonian Problem: The k-local Hamiltonian problem is a natural complete problem for the complexity class QMA, the quantum analog of NP. It is similar in spirit to MAX-k-SAT, which is NP-complete for k<=2. It was known that the problem is QMA-complete for any k <= 3. On the other hand 1-local Hamiltonian is in P, and hence not believed to be QMA-complete. The complexity of the 2-local Hamiltonian problem has long been outstanding. Here we settle the question and show that it is QMA-complete. We provide two independent proofs; our first proof uses only elementary linear algebra. Our second proof uses a powerful technique for analyzing the sum of two Hamiltonians; this technique is based on perturbation theory and we believe that it might prove useful elsewhere. Using our techniques we also show that adiabatic computation with two-local interactions on qubits is equivalent to standard quantum computation.) <|cite_end|>which was improved by | [
"<|reference_start|> {Approximate Low-Weight Check Codes and Circuit Lower Bounds for Noisy Ground States: The No Low-Energy Trivial States (NLTS) conjecture of Freedman and Hastings (Quantum Information and Computation 2014), which asserts the existence of local Hamiltonians whose low energy states cannot be generated by constant depth quantum circuits, identifies a fundamental obstacle to resolving the quantum PCP conjecture. Progress towards the NLTS conjecture was made by Eldar and Harrow (Foundations of Computer Science 2017), who proved a closely related theorem called No Low-Error Trivial States (NLETS). In this paper, we give a much simpler proof of the NLETS theorem, and use the same technique to establish superpolynomial circuit size lower bounds for noisy ground states of local Hamiltonians (assuming $\\mathsf{QCMA} \\neq \\mathsf{QMA}$), resolving an open question of Eldar and Harrow. We discuss the new light our results cast on the relationship between NLTS and NLETS. \nFinally, our techniques imply the existence of $\\textit{approximate quantum low-weight check (qLWC) codes}$ with linear rate, linear distance, and constant weight checks. These codes are similar to quantum LDPC codes except (1) each particle may participate in a large number of checks, and (2) errors only need to be corrected up to fidelity $1 - 1/\\mathsf{poly}(n)$. This stands in contrast to the best-known stabilizer LDPC codes due to Freedman, Meyer, and Luo which achieve a distance of $O(\\sqrt{n \\log n})$. \nThe principal technique used in our results is to leverage the Feynman-Kitaev clock construction to approximately embed a subspace of states defined by a circuit as the ground space of a local Hamiltonian. <|reference_end|>",
"<|reference_start|> Quantum LDPC Codes with Almost Linear Minimum Distance: We give a construction of quantum LDPC codes of dimension $\\Theta(\\log N)$ and distance $\\Theta(N/\\log N)$ as the code length $N\\to\\infty$. Using a product of chain complexes this construction also provides a family of quantum LDPC codes of distance $\\Omega(N^{1-\\alpha/2}/\\log N)$ and dimension $\\Omega(N^\\alpha \\log N)$, where $0 \\le \\alpha < 1$. We also introduce and study a new operation called lifted product, which naturally generalizes the product operations for quantum codes and chain complexes. Moreover, as a simple byproduct of our results on quantum codes, we obtain a new result on classical codes. We show that for any fixed $R < 1$ there exists an asymptotically good family of classical quasi-cyclic LDPC codes of rate at least $R$ with, in some sense, optimal circulant size $\\Omega(N/\\log N)$ as the code length $N\\to\\infty$. <|reference_end|>",
"<|reference_start|> Tradeoffs for reliable quantum information storage in 2d systems: We ask whether there are fundamental limits on storing quantum information reliably in a bounded volume of space. To investigate this question, we study quantum error correcting codes specified by geometrically local commuting constraints on a 2D lattice of finite-dimensional quantum particles. For these 2D systems, we derive a tradeoff between the number of encoded qubits k, the distance of the code d, and the number of particles n. It is shown that kd{2}=O(n) where the coefficient in O(n) depends only on the locality of the constraints and dimension of the Hilbert spaces describing individual particles. The analogous tradeoff for the classical information storage is k sqrt[d]=O(n). <|reference_end|>",
"<|reference_start|> Towards local testability for quantum coding: We introduce the hemicubic codes, a family of quantum codes obtained by associating qubits with the $p$-faces of the $n$-cube (for $n>p$) and stabilizer constraints with faces of dimension $(p\\pm1)$. The quantum code obtained by identifying antipodal faces of the resulting complex encodes one logical qubit into $N = 2^{n-p-1} \\tbinom{n}{p}$ physical qubits and displays local testability with a soundness of $\\Omega(1/\\log(N))$ beating the current state-of-the-art of $1/\\log^{2}(N)$ due to Hastings. We exploit this local testability to devise an efficient decoding algorithm that corrects arbitrary errors of size less than the minimum distance, up to polylog factors. We then extend this code family by considering the quotient of the $n$-cube by arbitrary linear classical codes of length $n$. We establish the parameters of these generalized hemicubic codes. Interestingly, if the soundness of the hemicubic code could be shown to be constant, similarly to the ordinary $n$-cube, then the generalized hemicubic codes could yield quantum locally testable codes of length not exceeding an exponential or even polynomial function of the code dimension. <|reference_end|>"
] | [
3,
14,
18,
37
] | {"<|multi_cite_1_1|>": "ss-1369908", "<|multi_cite_1_2|>": "ss-839944", "<|multi_cite_1_3|>": "ss-2466726", "<|multi_cite_1_4|>": "ss-1477186", "<|multi_cite_1_5|>": "ss-2186256", "<|multi_cite_1_6|>": "ss-1309478", "<|multi_cite_2_1|>": "ss-1385093", "<|multi_cite_2_2|>": "ss-1385094", "<|multi_cite_2_3|>": "ss-1068138", "<|cite_3|>": "ss-839944", "<|cite_4|>": "ss-2129153", "<|multi_cite_5_1|>": "ss-877572", "<|multi_cite_5_2|>": "ss-1309479", "<|cite_6|>": "ss-1080835", "<|multi_cite_7_1|>": "arxiv-308484", "<|multi_cite_7_2|>": "arxiv-288845", "<|multi_cite_7_3|>": "ss-843873", "<|cite_8|>": "ss-971347", "<|multi_cite_9_1|>": "ss-971347", "<|multi_cite_9_2|>": "ss-1032541", "<|cite_10|>": "ss-1435737", "<|cite_11|>": "arxiv-143869", "<|multi_cite_12_1|>": "ss-2466726", "<|multi_cite_12_2|>": "ss-1477186", "<|cite_13|>": "ss-1477186", "<|cite_14|>": "ss-2466726", "<|cite_15|>": "ss-971347", "<|multi_cite_16_1|>": "ss-2466726", "<|multi_cite_16_2|>": "ss-1309478", "<|multi_cite_17_1|>": "ss-839944", "<|multi_cite_17_2|>": "arxiv-51783", "<|multi_cite_17_3|>": "ss-950879", "<|multi_cite_17_4|>": "ss-2466726", "<|multi_cite_17_5|>": "ss-1477186", "<|cite_18|>": "arxiv-51783", "<|cite_19|>": "ss-2466726", "<|multi_cite_20_1|>": "ss-950879", "<|multi_cite_20_2|>": "arxiv-233020", "<|cite_21|>": "ss-1080835", "<|cite_22|>": "ss-1309480", "<|cite_23|>": "arxiv-677271", "<|cite_24|>": "ss-1309481", "<|multi_cite_25_1|>": "ss-1048452", "<|multi_cite_25_2|>": "ss-826141", "<|cite_26|>": "ss-1068136", "<|cite_27|>": "ss-815456", "<|multi_cite_28_1|>": "ss-1090356", "<|multi_cite_28_2|>": "ss-832088", "<|multi_cite_29_1|>": "ss-1032542", "<|multi_cite_29_2|>": "ss-1524385", "<|multi_cite_29_3|>": "ss-815456", "<|cite_30|>": "ss-765879", "<|multi_cite_31_1|>": "ss-839941", "<|multi_cite_31_2|>": "arxiv-677270", "<|multi_cite_32_1|>": "ss-1309482", "<|multi_cite_32_2|>": "ss-1208422", "<|cite_33|>": "ss-2466726", "<|multi_cite_34_1|>": "arxiv-288845", "<|multi_cite_34_2|>": "arxiv-308484", "<|cite_35|>": "ss-839944", "<|cite_36|>": "ss-2466726", "<|multi_cite_37_1|>": "ss-2466726", "<|multi_cite_37_2|>": "ss-1477186", "<|cite_38|>": "ss-1309478", "<|cite_39|>": "ss-1309483", "<|cite_40|>": "ss-1010809", "<|cite_41|>": "ss-1369908", "<|cite_42|>": "ss-2466726"} |
2310.02528 | <|paper_start|> Title: On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study
Abstract: On the Cognition of Visual Question Answering Models and Human Intelligence: A Comparative Study: Visual Question Answering (VQA) is a challenging task that requires cross-modal understanding and reasoning over a visual image and a natural-language question. To inspect the association of VQA models with human cognition, we designed a survey to record the human thinking process and analyzed VQA models by comparing their outputs and attention maps with those of humans. We found that although the VQA models resemble human cognition in architecture and perform similarly to humans at the recognition level, they still struggle with cognitive inferences. The analysis of the human thinking procedure serves to direct future research and to introduce more cognitive capacity into model features and architectures.
Introduction
The task of Visual Question Answering (VQA) leverages both the visual and the textual analysis to answer a free-form, open-ended, natural-language question with an image <|cite_start|> (Reference: VQA: Visual Question Answering: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).) <|cite_end|>. The fields of computer vision and natural language processing have attained much success to reach almost human-level performance separately, such as YOLOR in object recognition <|cite_start|> (Reference: You Only Learn One Representation: Unified Network for Multiple Tasks: People ``understand'' the world via vision, hearing, tactile, and also the past experience. Human experience can be learned through normal learning (we call it explicit knowledge), or subconsciously (we call it implicit knowledge). These experiences learned through normal learning or subconsciously will be encoded and stored in the brain. Using these abundant experience as a huge database, human beings can effectively process data, even they were unseen beforehand. In this paper, we propose a unified network to encode implicit knowledge and explicit knowledge together, just like the human brain can learn knowledge from normal learning as well as subconsciousness learning. The unified network can generate a unified representation to simultaneously serve various tasks. We can perform kernel space alignment, prediction refinement, and multi-task learning in a convolutional neural network. The results demonstrate that when implicit knowledge is introduced into the neural network, it benefits the performance of all tasks. We further analyze the implicit representation learnt from the proposed unified network, and it shows great capability on catching the physical meaning of different tasks. The source code of this work is at : https://github.com/WongKinYiu/yolor.) <|cite_end|> and RoBerta on the GLUE benchmark <|cite_start|> (Reference: RoBERTa: A Robustly Optimized BERT Pretraining Approach: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. 
We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code.) <|cite_end|>. Still, combining them to solve multi-disciplinary problems such as VQA remains a challenging task. Furthermore, such a combination may not only cost much more computational power but also require the development of new features and model architectures. A VQA model needs to extract the visual and textual representations, as well as their cross-modal interactions, to inform the final answer. Most importantly, the VQA task might need commonsense reasoning and semantic knowledge that come naturally to humans. A question that humans, with their specific cognitive abilities, find simple to answer might be difficult for a machine.
In pursuit of cognitive plausibility, we are interested in the association of VQA models with actual human cognition and perception from three perspectives: (1) model architecture inspired by the human thought process, (2) representation and collaborative understanding of visual and textual information, and (3) prediction results and their rationales. Based on our hypothesis about the human question-answering process, we train a VQA baseline model and test a state-of-the-art (SOTA) model to predict answers for given image and question pairs, and we develop a survey to collect human thought processes on a list of representative questions. By comparing outputs and attention maps from the baseline and SOTA models to those from humans, we are able to assess whether the deep learning approaches robustly resemble human results, and possibly human thinking processes, when identifying targets and answering questions, though we expect them to fail to capture some semantic information and rationales that are key to human reasoning. Our findings reveal both similarities and differences between human reasoning and the VQA models. Moreover, they point to new research directions for augmenting deep learning model designs: creating model architectures and training processes that are more intuitive to human cognition and to the thought processes involved in more complicated, multi-modal tasks.
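One simple way to quantify the model-versus-human comparison described above is to treat each attention map as a probability distribution over image regions and score their overlap. The sketch below uses our own illustrative metric and variable names, not the exact protocol of this study: it computes a histogram-intersection score between a model attention map and a human attention map on a shared grid.
\begin{verbatim}
import numpy as np

def attention_similarity(model_map, human_map):
    """Histogram intersection between two attention maps on the same grid.

    Both maps are non-negative arrays of identical shape; each is normalized
    to sum to 1, and the overlap is the sum of element-wise minima
    (1 = identical allocation of attention, 0 = disjoint).
    """
    m = np.asarray(model_map, dtype=float)
    h = np.asarray(human_map, dtype=float)
    m = m / m.sum()
    h = h / h.sum()
    return float(np.minimum(m, h).sum())

# Toy example on a 3x3 grid of image regions.
model_map = np.array([[0.0, 0.1, 0.0],
                      [0.1, 0.6, 0.1],
                      [0.0, 0.1, 0.0]])
human_map = np.array([[0.0, 0.0, 0.0],
                      [0.2, 0.6, 0.2],
                      [0.0, 0.0, 0.0]])
print(round(attention_similarity(model_map, human_map), 3))  # 0.8
\end{verbatim}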
Related Work
\subsection{VQA and Datasets}
Recent studies on Visual Question Answering (VQA) have an emergent need for an accurate and comprehensive dataset. Most VQA datasets are built upon Microsoft Common Objects in Context (MS COCO), a widely used vision-textual dataset curated for object recognition in the context of scene understanding with abundant varieties <|cite_start|> (Reference: Microsoft COCO: Common Objects in Context: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model.) <|cite_end|>. VQA 1.0 is a widely used dataset, which consists of the visual part of real images from MS COCO and abstract, animated scenes, as well as questions (often several for one image) and answers from human annotators <|cite_start|> (Reference: VQA: Visual Question Answering: We propose the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image, the task is to provide an accurate natural language answer. Mirroring real-world scenarios, such as helping the visually impaired, both the questions and answers are open-ended. Visual questions selectively target different areas of an image, including background details and underlying context. As a result, a system that succeeds at VQA typically needs a more detailed understanding of the image and complex reasoning than a system producing generic image captions. Moreover, VQA is amenable to automatic evaluation, since many open-ended answers contain only a few words or a closed set of answers that can be provided in a multiple-choice format. We provide a dataset containing ~0.25M images, ~0.76M questions, and ~10M answers (www.visualqa.org), and discuss the information it provides. Numerous baselines and methods for VQA are provided and compared with human performance. Our VQA demo is available on CloudCV (http://cloudcv.org/vqa).) <|cite_end|>. It contains 22 types of questions, which covers most general cases of real-life visual question answering scenarios. There is also a VQA v2.0 dataset <|cite_start|> (Reference: Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering: Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. 
We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at www.visualqa.org as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users.) <|cite_end|>, which improves upon the original VQA 1.0 dataset with its correction of imbalanced answers, where the number of answers to a question is often skewed. The balancing procedure is that for an image-question-answer triplet ($I, Q, A$), human annotators are asked to identify a different but similar image $I'$ such that the answer $A'$ to the same question $Q$ is different. With such an annotation system, the dataset will have a more even distribution of answers for each type of question.
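The effect of this balancing can be checked mechanically: if every question is paired with complementary images, then grouping the annotations by question string should yield at least two distinct answers per question. The sketch below assumes a flat list of (image_id, question, answer) records; the field names and sample data are ours, not the dataset's actual schema.
\begin{verbatim}
from collections import defaultdict

# Hypothetical annotation records: (image_id, question, answer).
records = [
    ("img_001", "Is the man wearing glasses?", "yes"),
    ("img_002", "Is the man wearing glasses?", "no"),   # complementary image
    ("img_003", "What color is the umbrella?", "red"),
    ("img_004", "What color is the umbrella?", "blue"), # complementary image
]

answers_by_question = defaultdict(set)
for image_id, question, answer in records:
    answers_by_question[question].add(answer)

# In a balanced split, every question should admit more than one answer.
balanced = all(len(ans) >= 2 for ans in answers_by_question.values())
print(balanced)  # True
\end{verbatim}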
\subsection{VQA Model Architectures}
The recent development of Visual Question Answering models is built upon the maturity of visual and textual embedding models. The high-level concept of the VQA system is to integrate the visual and textual representations of inputs. Generally, there are four directions of development, as we will discuss here.
\vspace{-.3cm}
\paragraph{Joint Embedding} Joint embedding creates embeddings for images and questions and jointly trains them into a unified visual-textual representation. Most models utilize pre-trained visual models such as VGG-Net <|cite_start|> (Reference: Very Deep Convolutional Networks for Large-Scale Image Recognition: In this work we investigate the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting. Our main contribution is a thorough evaluation of networks of increasing depth using an architecture with very small (3x3) convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers. These findings were the basis of our ImageNet Challenge 2014 submission, where our team secured the first and the second places in the localisation and classification tracks respectively. We also show that our representations generalise well to other datasets, where they achieve state-of-the-art results. We have made our two best-performing ConvNet models publicly available to facilitate further research on the use of deep visual representations in computer vision.) <|cite_end|> and ResNet <|cite_start|> (Reference: Deep Residual Learning for Image Recognition: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.) <|cite_end|>. These models are trained to have a universal visual representation that suits the need of general VQA problems <|cite_start|> (Reference: Learning to Reason: End-to-End Module Networks for Visual Question Answering: Natural language questions are inherently compositional, and many are most easily answered by reasoning about their decomposition into modular sub-problems. For example, to answer "is there an equal number of balls and boxes?" we can look for balls, look for boxes, count them, and compare the results. The recently proposed Neural Module Network (NMN) architecture implements this approach to question answering by parsing questions into linguistic substructures and assembling question-specific deep networks from smaller modules that each solve one subtask. 
However, existing NMN implementations rely on brittle off-the-shelf parsers, and are restricted to the module configurations proposed by these parsers rather than learning them from data. In this paper, we propose End-to-End Module Networks (N2NMNs), which learn to reason by directly predicting instance-specific network layouts without the aid of a parser. Our model learns to generate network structures (by imitating expert demonstrations) while simultaneously learning network parameters (using the downstream task loss). Experimental results on the new CLEVR dataset targeted at compositional question answering show that N2NMNs achieve an error reduction of nearly 50% relative to state-of-the-art attentional approaches, while discovering interpretable network architectures specialized for each question.) <|cite_end|> <|cite_start|> (Reference: Multimodal Convolutional Neural Networks for Matching Image and Sentence: In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content, and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words to different semantic fragments and learns the inter-modal relations between image and the composed fragments at different levels, thus fully exploit the matching relations between image and sentence. Experimental results on benchmark databases of bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for image and sentence matching. Specifically, our proposed m-CNNs for bidirectional image and sentence retrieval on Flickr30K and Microsoft COCO databases achieve the state-of-the-art performances.) <|cite_end|>. More recent studies have incorporated transformer architectures for image and text embeddings <|cite_start|> (Reference: Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks: Large-scale pre-training methods of learning cross-modal representations on image-text pairs are becoming popular for vision-language tasks. While existing methods simply concatenate image region features and text features as input to the model to be pre-trained and use self-attention to learn image-text semantic alignments in a brute force manner, in this paper, we propose a new learning method Oscar (Object-Semantics Aligned Pre-training), which uses object tags detected in images as anchor points to significantly ease the learning of alignments. Our method is motivated by the observation that the salient objects in an image can be accurately detected, and are often mentioned in the paired text. We pre-train an Oscar model on the public corpus of 6.5 million text-image pairs, and fine-tune it on downstream tasks, creating new state-of-the-arts on six well-established vision-language understanding and generation tasks.) <|cite_end|> <|cite_start|> (Reference: VLMo: Unified Vision-Language Pre-Training with Mixture-of-Modality-Experts: We present a unified Vision-Language pretrained Model (VLMo) that jointly learns a dual encoder and a fusion encoder with a modular Transformer network. 
Specifically, we introduce Mixture-of-Modality-Experts (MoME) Transformer, where each block contains a pool of modality-specific experts and a shared self-attention layer. Because of the modeling flexibility of MoME, pretrained VLMo can be fine-tuned as a fusion encoder for vision-language classification tasks, or used as a dual encoder for efficient image-text retrieval. Moreover, we propose a stagewise pre-training strategy, which effectively leverages large-scale image-only and text-only data besides image-text pairs. Experimental results show that VLMo achieves state-of-the-art results on various vision-language tasks, including VQA, NLVR2 and image-text retrieval. The code and pretrained models are available at https://aka.ms/vlmo.) <|cite_end|>. A variety of language models are used for textual representation. \citeA{ren2015faster} proposed the R-CNN architecture which uses VGG-Net and LSTM for object detection and downstream VQA tasks. \citeA{garderes2020conceptbert} proposed ConceptBert which utilizes BERT for textual embedding. To combine the two embeddings, multi-layer perceptrons are generally used. For transformer-based models, the transformer architecture will combine image and text into a unified representation.
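As a concrete illustration of this joint-embedding scheme, the following minimal PyTorch sketch fuses a pooled CNN image feature with an LSTM question encoding through a small MLP classifier; all module names, dimensions, and the answer-vocabulary size are illustrative assumptions rather than the configuration of any cited model.
\begin{verbatim}
import torch
import torch.nn as nn

class JointEmbeddingVQA(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=300, hidden_dim=1024,
                 img_feat_dim=2048, num_answers=3000):
        super().__init__()
        # Question encoder: word embeddings followed by an LSTM.
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        # Fuse the global image feature (e.g. from a pre-trained CNN)
        # with the question feature and classify over candidate answers.
        self.fuse = nn.Sequential(
            nn.Linear(img_feat_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, img_feat, question_tokens):
        # img_feat: (B, img_feat_dim) pooled CNN feature
        # question_tokens: (B, T) integer token ids
        _, (h, _) = self.lstm(self.embed(question_tokens))
        joint = torch.cat([img_feat, h[-1]], dim=1)
        return self.fuse(joint)  # (B, num_answers) answer logits
\end{verbatim}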
\vspace{-.3cm}
\paragraph{Attention mechanism} Joint embedding is limited such that image representation is global without a more fine-grained and relevant focus on the text. The aim of the attention mechanism is to find local features in the context of questions to further enhance a coherent representation of the two embeddings. A simple attention mechanism with the LSTM model is used by \citeA{zhu2016visual7w}. Further, <|cite_start|> (Reference: Ask, Attend and Answer: Exploring Question-Guided Spatial Attention for Visual Question Answering: We address the problem of Visual Question Answering (VQA), which requires joint image and language understanding to answer a question about a given photograph. Recent approaches have applied deep image captioning methods based on convolutional-recurrent networks to this problem, but have failed to model spatial inference. To remedy this, we propose a model we call the Spatial Memory Network and apply it to the VQA task. Memory networks are recurrent neural networks with an explicit attention mechanism that selects certain parts of the information stored in memory. Our Spatial Memory Network stores neuron activations from different spatial regions of the image in its memory, and uses the question to choose relevant regions for computing the answer, a process of which constitutes a single "hop" in the network. We propose a novel spatial attention architecture that aligns words with image patches in the first hop, and obtain improved results by adding a second attention hop which considers the whole question to choose visual evidence based on the results of the first hop. To better understand the inference process learned by the network, we design synthetic questions that specifically require spatial inference and visualize the attention weights. We evaluate our model on two published visual question answering datasets, DAQUAR [1] and VQA [2], and obtain improved results compared to a strong deep baseline model (iBOWIMG) which concatenates image and question features to predict the answer [3].) <|cite_end|> proposed a multi-hop image attention scheme by employing attention on both word and question levels.
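The following schematic PyTorch module illustrates one question-guided attention step over per-region image features: region scores are computed from the question encoding, normalized with a softmax, and used to pool the regions. The scoring function and all dimensions are illustrative assumptions, not the exact formulation of the cited models.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class QuestionGuidedAttention(nn.Module):
    def __init__(self, img_dim=2048, q_dim=1024, att_dim=512):
        super().__init__()
        self.proj_v = nn.Linear(img_dim, att_dim)
        self.proj_q = nn.Linear(q_dim, att_dim)
        self.score = nn.Linear(att_dim, 1)

    def forward(self, regions, q_feat):
        # regions: (B, R, img_dim) per-region CNN features
        # q_feat:  (B, q_dim) question encoding
        joint = torch.tanh(self.proj_v(regions)
                           + self.proj_q(q_feat).unsqueeze(1))
        alpha = F.softmax(self.score(joint).squeeze(-1), dim=1)  # (B, R)
        # Attended visual feature: attention-weighted sum of regions.
        attended = (alpha.unsqueeze(-1) * regions).sum(dim=1)
        return attended, alpha
\end{verbatim}
Multi-hop variants repeat such a step, refining the query with the attended feature before attending again.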
\vspace{-.3cm}
\paragraph{Compositional structure} In contrast to the two approaches above, which build monolithic representations of the visual and textual inputs, compositional approaches exploit the categorical nature of VQA questions: the system aggregates several sub-models and selects among them according to the question and the context of model outputs. Each sub-model can be fine-tuned to serve one specific type of question, and the overall model automatically selects which sub-model produces the final output. \citeA{noh2016image} proposed a model with dynamic memory networks (DMN) that performs multiple passes and applies a joint loss over the passes.
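The routing idea can be sketched as follows; the question categories, the soft mixture (rather than a hard selection), and all dimensions are hypothetical choices made only to keep the example runnable, not the design of any specific cited system.
\begin{verbatim}
import torch
import torch.nn as nn

class ComposedVQA(nn.Module):
    def __init__(self, feat_dim=1024, num_types=3, num_answers=3000):
        super().__init__()
        # Predict a question type (e.g. counting / color / yes-no).
        self.type_classifier = nn.Linear(feat_dim, num_types)
        # One specialized sub-model (here just a linear head) per type.
        self.experts = nn.ModuleList(
            [nn.Linear(feat_dim, num_answers) for _ in range(num_types)])

    def forward(self, fused_feat):
        # fused_feat: (B, feat_dim) joint image-question representation
        weights = torch.softmax(self.type_classifier(fused_feat), dim=1)
        expert_out = torch.stack(
            [e(fused_feat) for e in self.experts], dim=1)  # (B, types, answers)
        # Soft selection of the sub-model output; argmax gives hard routing.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)
\end{verbatim}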
\vspace{-.3cm}
\paragraph{Knowledge base} Intuitively, human perception involves abundant and pre-learned ``common sense'' and factual knowledge and uses such knowledge for specific tasks. The previously discussed approaches can only learn knowledge from the training set, which cannot achieve coverage for all real-life cases. This notion of transfer learning should conceptually help the model better analyze the inputs. Recent developments of large-scale knowledge bases, such as ConceptNet <|cite_start|> (Reference: Conceptnet 5.5: An Open Multilingual Graph of General Knowledge: Machine learning about language can be improved by supplying it with specific knowledge and sources of external information. We present here a new version of the linked open data resource ConceptNet that is particularly well suited to be used with modern NLP techniques such as word embeddings. ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use. When ConceptNet is combined with word embeddings acquired from distributional semantics (such as word2vec), it provides applications with understanding that they would not acquire from distributional semantics alone, nor from narrower resources such as WordNet or DBPedia. We demonstrate this with state-of-the-art results on intrinsic evaluations of word relatedness that translate into improvements on applications of word vectors, including solving SAT-style analogies.) <|cite_end|> and DBpedia <|cite_start|> (Reference: DBpedia: A Nucleus for a Web of Open Data: ) <|cite_end|>, promote the inclusion of such systems in VQA architectures. \citeA{wu2016ask} incorporated DBpedia with a joint embedding approach by retrieving and embedding external knowledge related to the textual and vision features with Doc2Vec, then feeding the knowledge-fused embedding into an LSTM model for interpretation. <|paper_end|> | [
"<|reference_start|> You Only Learn One Representation: Unified Network for Multiple Tasks: People ``understand'' the world via vision, hearing, tactile, and also the past experience. Human experience can be learned through normal learning (we call it explicit knowledge), or subconsciously (we call it implicit knowledge). These experiences learned through normal learning or subconsciously will be encoded and stored in the brain. Using these abundant experience as a huge database, human beings can effectively process data, even they were unseen beforehand. In this paper, we propose a unified network to encode implicit knowledge and explicit knowledge together, just like the human brain can learn knowledge from normal learning as well as subconsciousness learning. The unified network can generate a unified representation to simultaneously serve various tasks. We can perform kernel space alignment, prediction refinement, and multi-task learning in a convolutional neural network. The results demonstrate that when implicit knowledge is introduced into the neural network, it benefits the performance of all tasks. We further analyze the implicit representation learnt from the proposed unified network, and it shows great capability on catching the physical meaning of different tasks. The source code of this work is at : https://github.com/WongKinYiu/yolor. <|reference_end|>",
"<|reference_start|> RoBERTa: A Robustly Optimized BERT Pretraining Approach: Language model pretraining has led to significant performance gains but careful comparison between different approaches is challenging. Training is computationally expensive, often done on private datasets of different sizes, and, as we will show, hyperparameter choices have significant impact on the final results. We present a replication study of BERT pretraining (Devlin et al., 2019) that carefully measures the impact of many key hyperparameters and training data size. We find that BERT was significantly undertrained, and can match or exceed the performance of every model published after it. Our best model achieves state-of-the-art results on GLUE, RACE and SQuAD. These results highlight the importance of previously overlooked design choices, and raise questions about the source of recently reported improvements. We release our models and code. <|reference_end|>",
"<|reference_start|> Microsoft COCO: Common Objects in Context: We present a new dataset with the goal of advancing the state-of-the-art in object recognition by placing the question of object recognition in the context of the broader question of scene understanding. This is achieved by gathering images of complex everyday scenes containing common objects in their natural context. Objects are labeled using per-instance segmentations to aid in precise object localization. Our dataset contains photos of 91 objects types that would be easily recognizable by a 4 year old. With a total of 2.5 million labeled instances in 328k images, the creation of our dataset drew upon extensive crowd worker involvement via novel user interfaces for category detection, instance spotting and instance segmentation. We present a detailed statistical analysis of the dataset in comparison to PASCAL, ImageNet, and SUN. Finally, we provide baseline performance analysis for bounding box and segmentation detection results using a Deformable Parts Model. <|reference_end|>",
"<|reference_start|> Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering: Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in models that ignore visual information, leading to an inflated sense of their capability. We propose to counter these language priors for the task of Visual Question Answering (VQA) and make vision (the V in VQA) matter! Specifically, we balance the popular VQA dataset by collecting complementary images such that every question in our balanced dataset is associated with not just a single image, but rather a pair of similar images that result in two different answers to the question. Our dataset is by construction more balanced than the original VQA dataset and has approximately twice the number of image-question pairs. Our complete balanced dataset is publicly available at www.visualqa.org as part of the 2nd iteration of the Visual Question Answering Dataset and Challenge (VQA v2.0). We further benchmark a number of state-of-art VQA models on our balanced dataset. All models perform significantly worse on our balanced dataset, suggesting that these models have indeed learned to exploit language priors. This finding provides the first concrete empirical evidence for what seems to be a qualitative sense among practitioners. Finally, our data collection protocol for identifying complementary images enables us to develop a novel interpretable model, which in addition to providing an answer to the given (image, question) pair, also provides a counter-example based explanation. Specifically, it identifies an image that is similar to the original image, but it believes has a different answer to the same question. This can help in building trust for machines among their users. <|reference_end|>"
] | [
1,
2,
3,
5
] | {"<|cite_1|>": "arxiv-77148", "<|cite_2|>": "arxiv-339909", "<|cite_3|>": "arxiv-216284", "<|cite_4|>": "arxiv-60292", "<|cite_5|>": "arxiv-77148", "<|cite_6|>": "arxiv-111676", "<|cite_7|>": "arxiv-65675", "<|cite_8|>": "arxiv-88870", "<|multi_cite_9_1|>": "arxiv-121985", "<|multi_cite_9_2|>": "arxiv-76548", "<|multi_cite_10_1|>": "arxiv-259146", "<|multi_cite_10_2|>": "arxiv-378896", "<|cite_11|>": "arxiv-87334", "<|cite_12|>": "ss-1250552", "<|cite_13|>": "ss-847699"} |
1201.3194 | <|paper_start|> Title: A Perfect Model for Bounded Verification
Abstract: A Perfect Model for Bounded Verification: A class of languages C is perfect if it is closed under Boolean operations and the emptiness problem is decidable. Perfect language classes are the basis for the automata-theoretic approach to model checking: a system is correct if the language generated by the system is disjoint from the language of bad traces. Regular languages are perfect, but because the disjointness problem for CFLs is undecidable, no class containing the CFLs can be perfect. In practice, verification problems for language classes that are not perfect are often under-approximated by checking if the property holds for all behaviors of the system belonging to a fixed subset. A general way to specify a subset of behaviors is by using bounded languages (languages of the form w1* ... wk* for fixed words w1,...,wk). A class of languages C is perfect modulo bounded languages if it is closed under Boolean operations relative to every bounded language, and if the emptiness problem is decidable relative to every bounded language. We consider finding perfect classes of languages modulo bounded languages. We show that the class of languages accepted by multi-head pushdown automata are perfect modulo bounded languages, and characterize the complexities of decision problems. We also show that bounded languages form a maximal class for which perfection is obtained. We show that computations of several known models of systems, such as recursive multi-threaded programs, recursive counter machines, and communicating finite-state machines can be encoded as multi-head pushdown automata, giving uniform and optimal underapproximation algorithms modulo bounded languages.
Introduction
The automata-theoretic approach to model checking linear-time
properties formalizes the verification problem as a language-theoretic problem about
two automata: the {\em system automaton}, which recognizes the set of executions of the system,
and the {\em property automaton}, which recognizes either the sequences of actions
satisfying the property ({\em positive} specification), or those violating it ({\em negative} specification). Given a system automaton $S$ and a property automaton $P$, verification of positive and negative
specifications reduces to checking $L(S) \subseteq L(P)$ (inclusion problem), or to checking $L(S) \cap L(P) = \emptyset$ (disjointness problem), respectively.
Language classes effectively closed under boolean operations
and with a decidable emptiness problem are particularly interesting for the automata-theoretic approach.
For such classes, not only are the inclusion and disjointness problems decidable, but they also have many further
advantages. For example, in these classes systems are closed under parallel composition by rendez-vous, properties are closed under boolean operations, and systems can be seen as properties, or vice versa,
with many useful consequences for compositional and assume-guarantee verification
techniques. For all these reasons, we call these classes {\em perfect}.
The regular languages are perfect but, since the disjointness problem for
the context-free languages (CFL) is undecidable (see <|cite_start|> (Reference: Introduction to Automata Theory, Languages and Computation: ) <|cite_end|>), no class containing CFL can be perfect. This ``context-free barrier'' restricts the search for perfect
classes to those properly contained in CFL or incomparable with them, and both possibilities have been investigated. In a seminal paper <|cite_start|> (Reference: Adding nesting structure to Words: We propose the model of nested words for representation of data with both a linear ordering and a hierarchically nested matching of items. Examples of data with such dual linear-hierarchical structure include executions of structured programs, annotated linguistic data, and HTML/XML documents. Nested words generalize both words and ordered trees, and allow both word and tree operations. We define nested word automata—finite-state acceptors for nested words, and show that the resulting class of regular languages of nested words has all the appealing theoretical properties that the classical regular word languages enjoys: deterministic nested word automata are as expressive as their nondeterministic counterparts; the class is closed under union, intersection, complementation, concatenation, Kleene-*, prefixes, and language homomorphisms; membership, emptiness, language inclusion, and language equivalence are all decidable; and definability in monadic second order logic corresponds exactly to finite-state recognizability. We also consider regular languages of infinite nested words and show that the closure properties, MSO-characterization, and decidability of decision problems carry over.
The linear encodings of nested words give the class of visibly pushdown languages of words, and this class lies between balanced languages and deterministic context-free languages. We argue that for algorithmic verification of structured programs, instead of viewing the program as a context-free language over words, one should view it as a regular language of nested words (or equivalently, a visibly pushdown language), and this would allow model checking of many properties (such as stack inspection, pre-post conditions) that are not expressible in existing specification logics.
We also study the relationship between ordered trees and nested words, and the corresponding automata: while the analysis complexity of nested word automata is the same as that of classical tree automata, they combine both bottom-up and top-down traversals, and enjoy expressiveness and succinctness benefits over tree automata.) <|cite_end|>,
Alur and Madhusudan proved that the visibly pushdown languages---a subclass of CFL---are perfect,
a result that lead to a very successful theory and efficient algorithms (see e.g. <|cite_start|> (Reference: Visibly Pushdown Games: ) <|cite_end|> <|cite_start|> (Reference: Adding nesting structure to Words: We propose the model of nested words for representation of data with both a linear ordering and a hierarchically nested matching of items. Examples of data with such dual linear-hierarchical structure include executions of structured programs, annotated linguistic data, and HTML/XML documents. Nested words generalize both words and ordered trees, and allow both word and tree operations. We define nested word automata—finite-state acceptors for nested words, and show that the resulting class of regular languages of nested words has all the appealing theoretical properties that the classical regular word languages enjoys: deterministic nested word automata are as expressive as their nondeterministic counterparts; the class is closed under union, intersection, complementation, concatenation, Kleene-*, prefixes, and language homomorphisms; membership, emptiness, language inclusion, and language equivalence are all decidable; and definability in monadic second order logic corresponds exactly to finite-state recognizability. We also consider regular languages of infinite nested words and show that the closure properties, MSO-characterization, and decidability of decision problems carry over.
The linear encodings of nested words give the class of visibly pushdown languages of words, and this class lies between balanced languages and deterministic context-free languages. We argue that for algorithmic verification of structured programs, instead of viewing the program as a context-free language over words, one should view it as a regular language of nested words (or equivalently, a visibly pushdown language), and this would allow model checking of many properties (such as stack inspection, pre-post conditions) that are not expressible in existing specification logics.
We also study the relationship between ordered trees and nested words, and the corresponding automata: while the analysis complexity of nested word automata is the same as that of classical tree automata, they combine both bottom-up and top-down traversals, and enjoy expressiveness and succinctness benefits over tree automata.) <|cite_end|>). Later La Torre, Madhusudan, and Parlato discovered a perfect class incomparable with CFL: the languages recognized by multi-stack visibly pushdown
automata whose computations can be split into a fixed number of stages during which at most one stack is popped <|cite_start|> (Reference: A Robust Class of Context-Sensitive Languages: We define a new class of languages defined by multi-stack automata that forms a robust subclass of context-sensitive languages, with decidable emptiness and closure under boolean operations. This class, called multi-stack visibly pushdown languages (MVPLs), is defined using multi-stack pushdown automata with two restrictions: (a) the pushdown automaton is visible, i.e. the input letter determines the operation on the stacks, and (b) any computation of the machine can be split into k stages, where in each stage, there is at most one stack that is popped. MVPLs are an extension of visibly pushdown languages that captures noncontext free behaviors, and has applications in analyzing abstractions of multithreaded recursive programs, signifi- cantly enlarging the search space that can be explored for them. We show that MVPLs are closed under boolean operations, and problems such as emptiness and inclusion are decidable. We characterize MVPLs using monadic second-order logic over appropriate structures, and exhibit a Parikh theorem for them.) <|cite_end|>.
The ``context-free barrier'' continues to be a serious obstacle in many applications, in particular in the verification
of concurrent systems. For this reason, many tools only check a subset of the executions of the system.
Intuitively, they direct a {\em spotlight} to a region of the possible executions, and check whether the
executions {\em under the spotlight} satisfy the property. The spotlight is controlled by the user,
who can freely move it around to check different regions, and conventional verification corresponds to a
spotlight that illuminates all the space of possible executions. In particular, the ``spotlight principle''
is applied by bounded model-checkers, which unroll program loops and recursion up to a
fixed depth (often after taking the product of the program with an automaton
for the property to be checked), leaving a system
whose executions have a fixed bounded length
(see e.g. <|cite_start|> (Reference: Bounded Model Checking Using Satisfiability Solving: ) <|cite_end|> <|cite_start|> (Reference: Behavioral consistency of C and verilog programs using bounded model checking: We present an algorithm that checks behavioral consistency between an ANSI-C program and a circuit given in Verilog using Bounded Model Checking. Both the circuit and the program are unwound and translated into a formula that represents behavioral consistency. The formula is then checked using a SAT solver. We are able to translate C programs that include side effects, pointers, dynamic memory allocation, and loops with conditions that cannot be evaluated statically. We describe experimental results on various reactive circuits and programs, including a small processor given in Verilog and its Instruction Set Architecture given in ANSI-C.) <|cite_end|>).
It is also used by {\em context-bounded} checkers for multi-threaded programs <|cite_start|> (Reference: Context-Bounded Model Checking of Concurrent Software: ) <|cite_end|> <|cite_start|> (Reference: The Case for Context-Bounded Verification of Concurrent Programs: ) <|cite_end|> <|cite_start|> (Reference: Context-Bounded Analysis For Concurrent Programs With Dynamic Creation of Threads: Context-bounded analysis has been shown to be both efficient and effective at finding bugs in concurrent programs. According to its original definition, context-bounded analysis explores all behaviors of a concurrent program up to some fixed number of context switches between threads. This definition is inadequate for programs that create threads dynamically because bounding the number of context switches in a computation also bounds the number of threads involved in the computation. In this paper, we propose a more general definition of context-bounded analysis useful for programs with dynamic thread creation. The idea is to bound the number of context switches for each thread instead of bounding the number of switches of all threads. We consider several variants based on this new definition, and we establish decidability and complexity results for the analysis induced by them.) <|cite_end|>, which only examine
executions containing at most a fixed number of context-switches (communication events between threads).
Context-bounded checkers break the context-free barrier, but at the price of only exploring finite action sequences.\footnote{
More precisely, in automata-theoretic terms context-bounded checkers explore runs of $S$ of arbitrary length, but containing only a fixed number of
non-$\varepsilon$ transitions.}
Recently, building on ideas by Kahlon
on bounded languages <|cite_start|> (Reference: The mathematical theory of context free languages: ) <|cite_end|>,
context-bounded checking has been extended to {\em bounded verification} <|cite_start|> (Reference: Complexity of pattern-based verification for multithreaded programs: Pattern-based verification checks the correctness of the program executions that follow a given pattern, a regular expression over the alphabet of program transitions of the form w1* ... wn*. For multithreaded programs, the alphabet of the pattern is given by the synchronization operations between threads. We study the complexity of pattern-based verification for abstracted multithreaded programs in which, as usual in program analysis, conditions have been replaced by nondeterminism (the technique works also for boolean programs). While unrestricted verification is undecidable for abstracted multithreaded programs with recursive procedures and PSPACE-complete for abstracted multithreaded while-programs, we show that pattern-based verification is NP-complete for both classes. We then conduct a multiparameter analysis in which we study the complexity in the number of threads, the number of procedures per thread, the size of the procedures, and the size of the pattern. We first show that no algorithm for pattern-based verification can be polynomial in the number of threads, procedures per thread, or the size of the pattern (unless P=NP). Then, using recent results about Parikh images of regular languages and semilinear sets, we present an algorithm exponential in the number of threads, procedures per thread, and size of the pattern, but polynomial in the size of the procedures.) <|cite_end|>\footnote{In <|cite_start|> (Reference: Complexity of pattern-based verification for multithreaded programs: Pattern-based verification checks the correctness of the program executions that follow a given pattern, a regular expression over the alphabet of program transitions of the form w1* ... wn*. For multithreaded programs, the alphabet of the pattern is given by the synchronization operations between threads. We study the complexity of pattern-based verification for abstracted multithreaded programs in which, as usual in program analysis, conditions have been replaced by nondeterminism (the technique works also for boolean programs). While unrestricted verification is undecidable for abstracted multithreaded programs with recursive procedures and PSPACE-complete for abstracted multithreaded while-programs, we show that pattern-based verification is NP-complete for both classes. We then conduct a multiparameter analysis in which we study the complexity in the number of threads, the number of procedures per thread, the size of the procedures, and the size of the pattern. We first show that no algorithm for pattern-based verification can be polynomial in the number of threads, procedures per thread, or the size of the pattern (unless P=NP). Then, using recent results about Parikh images of regular languages and semilinear sets, we present an algorithm exponential in the number of threads, procedures per thread, and size of the pattern, but polynomial in the size of the procedures.) <|cite_end|> bounded verification was called
pattern-based verification, but, since pattern is a rather generic term, we
opt for bounded verification here.},
which checks whether the executions of the system that belong to a language of the form $w_1^* \ldots w_n^*$, for some
fixed finite words $w_1, \ldots, w_n$, satisfy a property.
In automata-theoretic terms, the spotlight principle corresponds to {\em verification modulo a language}.
The inclusion check $L(S) \subseteq L(P)$ and the disjointness check $L(S) \cap L(P) = \emptyset$ are replaced by
checks $L_M(S) \subseteq L_M(P)$ and $L_M(S) \cap L_M(P) = \emptyset$, respectively, where $L_M$ denotes $L \cap M$.
Context-bounded checking corresponds to verification modulo the language of all words up to fixed length, and bounded verification to verification modulo a bounded expression.
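To make these notions concrete, the following toy Python script illustrates verification modulo a bounded expression $w_1^* \ldots w_n^*$ for plain finite automata: it enumerates exponent vectors up to a cutoff and reports the words accepted by both the system and a negative specification. This is only meant to illustrate the definitions; it is a brute-force under-approximation, not the MHPDA-based procedure developed in this paper, and the automata in the example are made up.
\begin{verbatim}
from itertools import product

def dfa_accepts(dfa, word):
    # dfa = (initial state, {(state, letter): state}, accepting states);
    # missing transitions reject.
    q, delta, acc = dfa
    for a in word:
        if (q, a) not in delta:
            return False
        q = delta[(q, a)]
    return q in acc

def violations_modulo_bounded(system, bad, words, cutoff):
    # Words of the form words[0]^k0 ... words[m-1]^k(m-1), each ki <= cutoff,
    # accepted by both the system and the negative specification.
    found = []
    for ks in product(range(cutoff + 1), repeat=len(words)):
        w = "".join(u * k for u, k in zip(words, ks))
        if dfa_accepts(system, w) and dfa_accepts(bad, w):
            found.append(w)
    return found

# System accepting every word over {a, b}; bad behaviours: at least two b's.
system = (0, {(0, "a"): 0, (0, "b"): 0}, {0})
bad = (0, {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 2,
           (2, "a"): 2, (2, "b"): 2}, {2})
print(violations_modulo_bounded(system, bad, ["a", "b"], cutoff=2))
# -> ['bb', 'abb', 'aabb']
\end{verbatim}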
Verification modulo a language $M$ makes it possible to break the context-free barrier, which raises the question of
identifying perfect classes {\em modulo language classes}.
Given a boolean operation ${\it Op}(L_1, \ldots, L_n)$ on languages,
let us define the same operation modulo a language $M$ by ${\it Op}_M(L_1, \ldots, L_n) = {\it Op}(L_1\cap M, \ldots, L_n \cap M)$,
and, similarly, let us say that an automaton $A$ is empty modulo $M$ if $L(A) \cap M = \emptyset$.
Let $\mathcal{L}$ and $\mathcal{C}$ be classes of languages.
We call $\mathcal{L}$ {\em perfect modulo $\mathcal{C}$} if it is closed under Boolean operations modulo any $M\in\mathcal{C}$,
and has a decidable emptiness problem modulo any $M\in \mathcal{C}$.
It is easy to see that the recursive languages are perfect modulo the finite languages.
But for bounded expressions the question becomes harder.
The disjointness problem modulo a bounded expression is decidable for CFL <|cite_start|> (Reference: The mathematical theory of context free languages: ) <|cite_end|>,
which hints at a perfect class modulo bounded expressions containing CFL.
However, CFL itself is not perfect modulo bounded expressions, because it is not closed under intersection:
there is no CFL $L$ such that $\set{a^nb^nc^*\mid n\geq 0} \cap \set{a^*b^nc^n\mid n\geq 0}\cap a^*b^*c^* = L\cap a^*b^*c^*$.
In this paper we present the first perfect class modulo bounded expressions: the
languages recognized by {\em multihead pushdown automata} (MHPDA).
This result is very satisfactory, because the class has a simple and purely syntactic definition,
and as we demonstrate, is expressive enough to capture many well-known models.
We also characterize the complexity of the Boolean operations and the emptiness check modulo bounded expressions:
we show that the emptiness check is coNEXPTIME-complete, union and intersection are polynomial,
and complementation is at most triply exponential.
Surprisingly, the emptiness problem is coNP-complete (and complementation doubly
exponential)
for the subclass of {\em letter-bounded} expressions, in which
each of the words $w_1,\ldots, w_n$ is a single letter. For instance, $a^*b^*c^*$ is letter-bounded, whereas $(ab)^*c^*$ is bounded but not letter-bounded.
We also show that bounded expressions are a maximal class of regular languages for which perfection
can be attained for MHPDAs: adding any additional language leads to undecidability of emptiness.
In the second part of the paper, we show that central automata models of
software can be encoded into MHPDA. Encoding recursive multithreaded programs
to MHPDA is obvious, since the intersection of CFLs is MHPDA-definable, and we
subsume the results of Esparza and Ganty <|cite_start|> (Reference: Complexity of pattern-based verification for multithreaded programs: Pattern-based verification checks the correctness of the program executions that follow a given pattern, a regular expression over the alphabet of program transitions of the form w1* ... wn*. For multithreaded programs, the alphabet of the pattern is given by the synchronization operations between threads. We study the complexity of pattern-based verification for abstracted multithreaded programs in which, as usual in program analysis, conditions have been replaced by nondeterminism (the technique works also for boolean programs). While unrestricted verification is undecidable for abstracted multithreaded programs with recursive procedures and PSPACE-complete for abstracted multithreaded while-programs, we show that pattern-based verification is NP-complete for both classes. We then conduct a multiparameter analysis in which we study the complexity in the number of threads, the number of procedures per thread, the size of the procedures, and the size of the pattern. We first show that no algorithm for pattern-based verification can be polynomial in the number of threads, procedures per thread, or the size of the pattern (unless P=NP). Then, using recent results about Parikh images of regular languages and semilinear sets, we present an algorithm exponential in the number of threads, procedures per thread, and size of the pattern, but polynomial in the size of the procedures.) <|cite_end|>. Additionally, we supply
encodings for recursive counter machines (\(\kcm\)), the main
automata-theoretic model of procedural programs with integer variables, and for
finite-state machines communicating through unbounded perfect FIFO channels
(\(\cfsm\)), the most popular model for the verification of communication
protocols. While the existence of some encoding is not surprising, since
emptiness problems for \(\kcm\), \(\cfsm\), and MHPDA are all undecidable, our
encodings exhibit only a small polynomial blowup, and, perhaps more
importantly, preserve bounded behaviours. More precisely, using our encodings
we reduce {\em bounded control-state reachability} for \(\kcm\) and
\(\cfsm\)---deciding reachability of a given control state by means of a
computation conforming to a bounded expression---to non-emptiness of MHPDA
modulo a bounded expression. As a consequence, we prove that bounded
control-state reachability for both \(\kcm\) and \(\cfsm\) are NP-complete. The
NP-completeness also extends to {\em unrestricted} control-state reachability
for {\em flat} \(\kcm\) and \emph{flat} \(\cfsm\), because by construction
their computations conform to a bounded expression. (See e.g. <|cite_start|> (Reference: Flat Counter Automata Almost Everywhere!: ) <|cite_end|> and <|cite_start|> (Reference: Symbolic Reachability Analysis of FIFO-Channel Systems with Nonregular Sets of Configurations: ) <|cite_end|> for a study of those models).
More generally, our language-based approach provides a uniform framework for
the verification of models using auxiliary storage like counters, queues or a
mix of both as defined in <|cite_start|> (Reference: Composition of Accelerations to Verify Infinite Heterogeneous Systems: ) <|cite_end|>. Incidentally, our framework allows us
to uniformly derive optimal complexity upper bounds for models manipulating
counters, queues or both, and shared memory multithreaded programs.
\smallskip\noindent{\em Related work.} Multi-tape and multi-head finite-state
and pushdown machines were extensively studied in the 1960's and 1970's, e.g. <|cite_start|> (Reference: A Note on Semilinear Sets and Bounded-Reversal Multihead Pushdown Automata: ) <|cite_end|>.
The decidability of emptiness for MHPDA modulo bounded languages
was proved by Ibarra in <|cite_start|> (Reference: A Note on Semilinear Sets and Bounded-Reversal Multihead Pushdown Automata: ) <|cite_end|>, using previous results going back to his
(hard to find) PhD thesis. Our proof settles the complexity
of the problem (coNEXPTIME-complete). Additionally, our constructions show
the surprising coNP-completeness result for letter-bounded expressions.
(A similar coNP-completeness result was recently
obtained in <|cite_start|> (Reference: Model Checking Recursive Programs with Numeric Data Types: ) <|cite_end|>, but for a different model.)
Reversal-bounded counter machines as bounded language acceptors (see e.g. <|cite_start|> (Reference: Reversal-Bounded Multicounter Machines and Their Decision Problems: Decidable and undecidable properties of various classes of two-way multicounter machines (deterministic, nondeterministic, multitape, pushdown store augmented) with reversal-bounded input and/or counters are investigated. In particular it is shown that the emptiness, infiniteness, disjointness, containment, universe, and equivalence problems are decidable for the class of deterministic two-way multicounter machines whose input and counters are reversal-bounded) <|cite_end|>) and Bounded Parikh
automata <|cite_start|> (Reference: Bounded Parikh Automata: The Parikh finite word automaton model (PA) was introduced and studied by Klaedtke and Ruess in 2003. Here, by means of related models, it is shown that the bounded languages recognized by PA are the same as those recognized by deterministic PA. Moreover, this class of languages is the class of bounded languages whose set of iterations is semilinear.) <|cite_end|> have the same expressive power as MHPDA modulo
bounded expressions (they all recognize the languages of the form $\{w_1^{k_1}
\ldots w_n^{k_n} \mid (k_1, \ldots, k_n) \in S\}$ for some semilinear set $S$).
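As an illustration of this characterization (our example, not taken from the cited works): the language $\{a^n b^n c^n \mid n \geq 0\}$, which is not context-free, is of this form with $w_1 = a$, $w_2 = b$, $w_3 = c$ and the semilinear set $S = \{(n,n,n) \mid n \geq 0\}$, and hence belongs to this class.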
These three characterizations of the same class complement each other. While
MHPDAs have the modelling advantage of allowing one to directly encode recursive
procedures, queues and counters, reversal-bounded counter machines (and by extension
flat counter machines) have very good
algorithmic methods and tool support (see e.g. <|cite_start|> (Reference: Model Checking Recursive Programs with Numeric Data Types: ) <|cite_end|> <|cite_start|> (Reference: FAST: acceleration from theory to practice: ) <|cite_end|>). Our results allow us to apply these
algorithms and tools to a larger range of problems. <|paper_end|> | [
"<|reference_start|> A Robust Class of Context-Sensitive Languages: We define a new class of languages defined by multi-stack automata that forms a robust subclass of context-sensitive languages, with decidable emptiness and closure under boolean operations. This class, called multi-stack visibly pushdown languages (MVPLs), is defined using multi-stack pushdown automata with two restrictions: (a) the pushdown automaton is visible, i.e. the input letter determines the operation on the stacks, and (b) any computation of the machine can be split into k stages, where in each stage, there is at most one stack that is popped. MVPLs are an extension of visibly pushdown languages that captures noncontext free behaviors, and has applications in analyzing abstractions of multithreaded recursive programs, signifi- cantly enlarging the search space that can be explored for them. We show that MVPLs are closed under boolean operations, and problems such as emptiness and inclusion are decidable. We characterize MVPLs using monadic second-order logic over appropriate structures, and exhibit a Parikh theorem for them. <|reference_end|>",
"<|reference_start|> Context-Bounded Analysis For Concurrent Programs With Dynamic Creation of Threads: Context-bounded analysis has been shown to be both efficient and effective at finding bugs in concurrent programs. According to its original definition, context-bounded analysis explores all behaviors of a concurrent program up to some fixed number of context switches between threads. This definition is inadequate for programs that create threads dynamically because bounding the number of context switches in a computation also bounds the number of threads involved in the computation. In this paper, we propose a more general definition of context-bounded analysis useful for programs with dynamic thread creation. The idea is to bound the number of context switches for each thread instead of bounding the number of switches of all threads. We consider several variants based on this new definition, and we establish decidability and complexity results for the analysis induced by them. <|reference_end|>",
"<|reference_start|> The mathematical theory of context free languages: <|reference_end|>",
"<|reference_start|> A Note on Semilinear Sets and Bounded-Reversal Multihead Pushdown Automata: <|reference_end|>"
] | [
4,
9,
10,
19
] | {"<|cite_1|>": "ss-916560", "<|cite_2|>": "ss-1277289", "<|multi_cite_3_1|>": "ss-1096322", "<|multi_cite_3_2|>": "ss-1277289", "<|cite_4|>": "ss-678501", "<|multi_cite_5_1|>": "ss-695487", "<|multi_cite_5_2|>": "ss-1155769", "<|multi_cite_6_1|>": "ss-678500", "<|multi_cite_6_2|>": "ss-1011059", "<|multi_cite_6_3|>": "arxiv-25966", "<|cite_8|>": "ss-1969413", "<|cite_9|>": "ss-1695958", "<|cite_10|>": "ss-1695958", "<|cite_11|>": "ss-1969413", "<|cite_12|>": "ss-1695958", "<|cite_13|>": "ss-1009725", "<|cite_14|>": "ss-1017482", "<|cite_15|>": "ss-1816275", "<|multi_cite_16_2|>": "ss-1697799", "<|cite_17|>": "ss-1697799", "<|cite_19|>": "ss-1011066", "<|cite_20|>": "ss-1852343", "<|cite_21|>": "arxiv-23844", "<|cite_22|>": "ss-1011066", "<|cite_23|>": "ss-1695395"} |
2401.17904-1 | <|cite_start|> (Reference: Page Segmentation using a Convolutional Neural Network with Trainable
Co-Occurrence Features: In document analysis, page segmentation is a fundamental task that divides a document image into semantic regions. In addition to local features, such as pixel-wise information, co-occurrence features are also useful for extracting texture-like periodic information for accurate segmentation. However, existing convolutional neural network (CNN)-based methods do not have any mechanisms that explicitly extract co-occurrence features. In this paper, we propose a method for page segmentation using a CNN with trainable multiplication layers (TMLs). The TML is specialized for extracting co-occurrences from feature maps, thereby supporting the detection of objects with similar textures and periodicities. This property is also considered to be effective for document image analysis because of regularity in text line structures, tables, etc. In the experiment, we achieved promising performance on a pixel-wise page segmentation task by combining TMLs with U-Net. The results demonstrate that TMLs can improve performance compared to the original U-Net. The results also demonstrate that TMLs are helpful for detecting regions with periodically repeating features, such as tables and main text.) <|cite_end|>objects. Language model-based approaches <|cite_start|> (Reference: LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking: Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose \textbf{LayoutLMv3} to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis. The code and models are publicly available at \url{https://aka.ms/layoutlmv3}.) <|cite_end|> <|cite_start|> (Reference: StructuralLM: Structural Pre-training for Form Understanding: Large pre-trained language models achieve state-of-the-art results when fine-tuned on downstream NLP tasks. However, they almost exclusively focus on text-only representation, while neglecting cell-level layout information that is important for form image understanding. In this paper, we propose a new pre-training approach, StructuralLM, to jointly leverage cell and layout information from scanned documents. Specifically, we pre-train StructuralLM with two new designs to make the most of the interactions of cell and layout information: 1) each cell as a semantic unit; 2) classification of cell positions. 
The pre-trained StructuralLM achieves new state-of-the-art results in different types of downstream tasks, including form understanding (from 78.95 to 85.14), document visual question answering (from 72.59 to 83.94) and document image classification (from 94.43 to 96.08).) <|cite_end|>, leverage Optical Character Recognition (OCR) tokens to group words into segments for semantic parsing. recently, Long \etal <|cite_start|> (Reference: Towards End-to-End Unified Scene Text Detection and Layout Analysis: Scene text detection and document layout analysis have long been treated as two separate tasks in different image domains. In this paper, we bring them together and introduce the task of unified scene text detection and layout analysis. The first hierarchical scene text dataset is introduced to enable this novel research task. We also propose a novel method that is able to simultaneously detect scene text and form text clusters in a unified way. Comprehensive experiments show that our unified model achieves better performance than multiple well-designed baseline methods. Additionally, this model achieves state-of-the-art results on multiple scene text detection datasets without the need of complex post-processing. Dataset and code: https://github.com/google-research-datasets/hiertext and https://github.com/tensorflow/models/tree/master/official/projects/unified_detector.) <|cite_end|>introduced a Unified Detector (UD) that addresses joint text detection and layout analysis in both natural and document scenarios. In UD, each object query is trained to segment words within a text line, and a layout branch generates an affinity matrix between different object queries for clustering into paragraphs. However, UD falls short in providing stroke masks, complete text-line, and paragraph masks.
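The grouping step described above can be sketched generically: given an affinity matrix over object queries, thresholding it and extracting connected components yields one cluster (paragraph) per component. The threshold and the example matrix below are illustrative assumptions; this is not the authors' implementation.
\begin{verbatim}
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

def cluster_queries(affinity, threshold=0.5):
    # affinity: (N, N) symmetric scores between object queries in [0, 1].
    adj = csr_matrix(affinity >= threshold)
    _, labels = connected_components(adj, directed=False)
    return labels  # one paragraph id per query

aff = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.2],
                [0.1, 0.2, 1.0]])
print(cluster_queries(aff))  # -> [0 0 1]
\end{verbatim}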
\subsection{Adapting Vision Foundation Model for Text Tasks}
Recent studies <|cite_start|> (Reference: Vision-Language Pre-Training for Boosting Scene Text Detectors: Recently, vision-language joint representation learning has proven to be highly effective in various scenarios. In this paper, we specifically adapt vision-language joint learning for scene text detection, a task that intrinsically involves cross-modal interaction between the two modalities: vision and language, since text is the written form of language. Concretely, we propose to learn contextualized, joint representations through vision-language pre-training, for the sake of enhancing the performance of scene text detectors. Towards this end, we devise a pre-training architecture with an image encoder, a text encoder and a cross-modal encoder, as well as three pretext tasks: image-text contrastive learning (ITC), masked language modeling (MLM) and word-in-image prediction (WIP). The pre-trained model is able to produce more informative representations with richer semantics, which could readily benefit existing scene text detectors (such as EAST and PSENet) in the down-stream text detection task. Extensive experiments on standard benchmarks demonstrate that the proposed paradigm can significantly improve the performance of various representative text detectors, outperforming previous pre-training approaches. The code and pre-trained models will be publicly released.) <|cite_end|> <|cite_start|> (Reference: Language Matters: A Weakly Supervised Vision-Language Pre-training Approach for Scene Text Detection and Spotting: Recently, Vision-Language Pre-training (VLP) techniques have greatly benefited various vision-language tasks by jointly learning visual and textual representations, which intuitively helps in Optical Character Recognition (OCR) tasks due to the rich visual and textual information in scene text images. However, these methods cannot well cope with OCR tasks because of the difficulty in both instance-level text encoding and image-text pair acquisition (i.e. images and captured texts in them). This paper presents a weakly supervised pre-training method, oCLIP, which can acquire effective scene text representations by jointly learning and aligning visual and textual information. Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features, respectively, as well as a visual-textual decoder that models the interaction among textual and visual features for learning effective scene text representations. With the learning of textual features, the pre-trained model can attend texts in images well with character awareness. Besides, these designs enable the learning from weakly annotated texts (i.e. partial texts in images without text bounding boxes) which mitigates the data annotation constraint greatly. Experiments over the weakly annotated images in ICDAR2019-LSVT show that our pre-trained model improves F-score by +2.5\% and +4.8\% while transferring its weights to other text detection and spotting networks, respectively. In addition, the proposed method outperforms existing pre-training techniques consistently across multiple public datasets (e.g., +3.2\% and +1.3\% for Total-Text and CTW1500).) <|cite_end|> <|cite_start|> (Reference: Turning a CLIP Model into a Scene Text Detector: The recent large-scale Contrastive Language-Image Pretraining (CLIP) model has shown great potential in various downstream tasks via leveraging the pretrained vision and language knowledge. 
Scene text, which contains rich textual and visual information, has an inherent connection with a model like CLIP. Recently, pretraining approaches based on vision language models have made effective progresses in the field of text detection. In contrast to these works, this paper proposes a new method, termed TCM, focusing on Turning the CLIP Model directly for text detection without pretraining process. We demonstrate the advantages of the proposed TCM as follows: (1) The underlying principle of our framework can be applied to improve existing scene text detector. (2) It facilitates the few-shot training capability of existing methods, e.g., by using 10% of labeled data, we significantly improve the performance of the baseline method with an average of 22% in terms of the F-measure on 4 benchmarks. (3) By turning the CLIP model into existing scene text detection methods, we further achieve promising domain adaptation ability. The code will be publicly released at https://github.com/wenwenyu/TCM.) <|cite_end|> <|cite_start|> (Reference: Turning a CLIP Model into a Scene Text Spotter: We exploit the potential of the large-scale Contrastive Language-Image Pretraining (CLIP) model to enhance scene text detection and spotting tasks, transforming it into a robust backbone, FastTCM-CR50. This backbone utilizes visual prompt learning and cross-attention in CLIP to extract image and text-based prior knowledge. Using predefined and learnable prompts, FastTCM-CR50 introduces an instance-language matching process to enhance the synergy between image and text embeddings, thereby refining text regions. Our Bimodal Similarity Matching (BSM) module facilitates dynamic language prompt generation, enabling offline computations and improving performance. FastTCM-CR50 offers several advantages: 1) It can enhance existing text detectors and spotters, improving performance by an average of 1.7% and 1.5%, respectively. 2) It outperforms the previous TCM-CR50 backbone, yielding an average improvement of 0.2% and 0.56% in text detection and spotting tasks, along with a 48.5% increase in inference speed. 3) It showcases robust few-shot training capabilities. Utilizing only 10% of the supervised data, FastTCM-CR50 improves performance by an average of 26.5% and 5.5% for text detection and spotting tasks, respectively. 4) It consistently enhances performance on out-of-distribution text detection and spotting datasets, particularly the NightTime-ArT subset from ICDAR2019-ArT and the DOTA dataset for oriented object detection. The code is available at https://github.com/wenwenyu/TCM.) <|cite_end|>have delved into leveraging CLIP <|cite_start|> (Reference: Learning Transferable Visual Models From Natural Language Supervision: State-of-the-art computer vision systems are trained to predict a fixed set of predetermined object categories. This restricted form of supervision limits their generality and usability since additional labeled data is needed to specify any other visual concept. Learning directly from raw text about images is a promising alternative which leverages a much broader source of supervision. We demonstrate that the simple pre-training task of predicting which caption goes with which image is an efficient and scalable way to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet. 
After pre-training, natural language is used to reference learned visual concepts (or describe new ones) enabling zero-shot transfer of the model to downstream tasks. We study the performance of this approach by benchmarking on over 30 different existing computer vision datasets, spanning tasks such as OCR, action recognition in videos, geo-localization, and many types of fine-grained object classification. The model transfers non-trivially to most tasks and is often competitive with a fully supervised baseline without the need for any dataset specific training. For instance, we match the accuracy of the original ResNet-50 on ImageNet zero-shot without needing to use any of the 1.28 million training examples it was trained on. We release our code and pre-trained model weights at https://github.com/OpenAI/CLIP.) <|cite_end|>to improve backbone representations for scene text detection and spotting. Specifically, oCLIP <|cite_start|> (Reference: Language Matters: A Weakly Supervised Vision-Language Pre-training Approach for Scene Text Detection and Spotting: Recently, Vision-Language Pre-training (VLP) techniques have greatly benefited various vision-language tasks by jointly learning visual and textual representations, which intuitively helps in Optical Character Recognition (OCR) tasks due to the rich visual and textual information in scene text images. However, these methods cannot well cope with OCR tasks because of the difficulty in both instance-level text encoding and image-text pair acquisition (i.e. images and captured texts in them). This paper presents a weakly supervised pre-training method, oCLIP, which can acquire effective scene text representations by jointly learning and aligning visual and textual information. Our network consists of an image encoder and a character-aware text encoder that extract visual and textual features, respectively, as well as a visual-textual decoder that models the interaction among textual and visual features for learning effective scene text representations. With the learning of textual features, the pre-trained model can attend texts in images well with character awareness. Besides, these designs enable the learning from weakly annotated texts (i.e. partial texts in images without text bounding boxes) which mitigates the data annotation constraint greatly. Experiments over the weakly annotated images in ICDAR2019-LSVT show that our pre-trained model improves F-score by +2.5\% and +4.8\% while transferring its weights to other text detection and spotting networks, respectively. In addition, the proposed method outperforms existing pre-training techniques consistently across multiple public datasets (e.g., +3.2\% and +1.3\% for Total-Text and CTW1500).) <|cite_end|>introduces a weakly supervised pre-trained network aligning visual and partial textual information for scene text detection and spotting. In contrast, TCM <|cite_start|> (Reference: Turning a CLIP Model into a Scene Text Detector: The recent large-scale Contrastive Language-Image Pretraining (CLIP) model has shown great potential in various downstream tasks via leveraging the pretrained vision and language knowledge. Scene text, which contains rich textual and visual information, has an inherent connection with a model like CLIP. Recently, pretraining approaches based on vision language models have made effective progresses in the field of text detection. 
In contrast to these works, this paper proposes a new method, termed TCM, focusing on Turning the CLIP Model directly for text detection without pretraining process. We demonstrate the advantages of the proposed TCM as follows: (1) The underlying principle of our framework can be applied to improve existing scene text detector. (2) It facilitates the few-shot training capability of existing methods, e.g., by using 10% of labeled data, we significantly improve the performance of the baseline method with an average of 22% in terms of the F-measure on 4 benchmarks. (3) By turning the CLIP model into existing scene text detection methods, we further achieve promising domain adaptation ability. The code will be publicly released at https://github.com/wenwenyu/TCM.) <|cite_end|>adapts the CLIP model for scene text detection through visual prompt tuning. Additionally, for enhanced scene text recognition, CLIP-OCR <|cite_start|> (Reference: Symmetrical Linguistic Feature Distillation with CLIP for Scene Text Recognition: In this paper, we explore the potential of the Contrastive Language-Image Pretraining (CLIP) model in scene text recognition (STR), and establish a novel Symmetrical Linguistic Feature Distillation framework (named CLIP-OCR) to leverage both visual and linguistic knowledge in CLIP. Different from previous CLIP-based methods mainly considering feature generalization on visual encoding, we propose a symmetrical distillation strategy (SDS) that further captures the linguistic knowledge in the CLIP text encoder. By cascading the CLIP image encoder with the reversed CLIP text encoder, a symmetrical structure is built with an image-to-text feature flow that covers not only visual but also linguistic information for distillation.Benefiting from the natural alignment in CLIP, such guidance flow provides a progressive optimization objective from vision to language, which can supervise the STR feature forwarding process layer-by-layer.Besides, a new Linguistic Consistency Loss (LCL) is proposed to enhance the linguistic capability by considering second-order statistics during the optimization. Overall, CLIP-OCR is the first to design a smooth transition between image and text for the STR task.Extensive experiments demonstrate the effectiveness of CLIP-OCR with 93.8% average accuracy on six popular STR benchmarks.Code will be available at https://github.com/wzx99/CLIPOCR.) <|cite_end|>proposes a symmetrical distillation strategy capturing linguistic knowledge in the CLIP text encoder. In our work, we advance this field by developing a hierarchical and promptable segmentation framework using SAM <|cite_start|> (Reference: Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.) <|cite_end|>.
\subsection{Segment Anything Model and Follow-ups}
SAM <|cite_start|> (Reference: Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision.) <|cite_end|>is a pioneering vision model for general image segmentation, exhibiting remarkable generalization capabilities through large-scale pre-training. It extends its impact to diverse downstream tasks such as image matting <|cite_start|> (Reference: Matte Anything: Interactive Natural Image Matting with Segment Anything Models: Natural image matting algorithms aim to predict the transparency map (alpha-matte) with the trimap guidance. However, the production of trimap often requires significant labor, which limits the widespread application of matting algorithms on a large scale. To address the issue, we propose Matte Anything (MatAny), an interactive natural image matting model that could produce high-quality alpha-matte with various simple hints. The key insight of MatAny is to generate pseudo trimap automatically with contour and transparency prediction. In our work, we leverage vision foundation models to enhance the performance of natural image matting. Specifically, we use the segment anything model to predict high-quality contour with user interaction and an open-vocabulary detector to predict the transparency of any object. Subsequently, a pre-trained image matting model generates alpha mattes with pseudo trimaps. MatAny is the interactive matting algorithm with the most supported interaction methods and the best performance to date. It consists of orthogonal vision models without any additional training. We evaluate the performance of MatAny against several current image matting algorithms. MatAny has 58.3% improvement on MSE and 40.6% improvement on SAD compared to the previous image matting methods with simple guidance, achieving new state-of-the-art (SOTA) performance. The source codes and pre-trained models are available at https://github.com/hustvl/Matte-Anything.) <|cite_end|>, 3D segmentation <|cite_start|> (Reference: Segment Anything in 3d with Nerfs: .) <|cite_end|>, video tracking <|cite_start|> (Reference: Segment and Track Anything: This report presents a framework called Segment And Track Anything (SAMTrack) that allows users to precisely and effectively segment and track any object in a video. Additionally, SAM-Track employs multimodal interaction methods that enable users to select multiple objects in videos for tracking, corresponding to their specific requirements. These interaction methods comprise click, stroke, and text, each possessing unique benefits and capable of being employed in combination. As a result, SAM-Track can be used across an array of fields, ranging from drone technology, autonomous driving, medical imaging, augmented reality, to biological analysis. 
SAM-Track amalgamates Segment Anything Model (SAM), an interactive key-frame segmentation model, with our proposed AOT-based tracking model (DeAOT), which secured 1st place in four tracks of the VOT 2022 challenge, to facilitate object tracking in video. In addition, SAM-Track incorporates Grounding-DINO, which enables the framework to support text-based interaction. We have demonstrated the remarkable capabilities of SAM-Track on DAVIS-2016 Val (92.0%), DAVIS-2017 Test (79.2%)and its practicability in diverse applications. The project page is available at: https://github.com/z-x-yang/Segment-and-Track-Anything.) <|cite_end|> <|cite_start|> (Reference: Track Anything: Segment Anything Meets Videos: Recently, the Segment Anything Model (SAM) gains lots of attention rapidly due to its impressive segmentation performance on images. Regarding its strong ability on image segmentation and high interactivity with different prompts, we found that it performs poorly on consistent segmentation in videos. Therefore, in this report, we propose Track Anything Model (TAM), which achieves high-performance interactive tracking and segmentation in videos. To be detailed, given a video sequence, only with very little human participation, i.e., several clicks, people can track anything they are interested in, and get satisfactory results in one-pass inference. Without additional training, such an interactive design performs impressively on video object tracking and segmentation. All resources are available on {https://github.com/gaomingqi/Track-Anything}. We hope this work can facilitate related research.) <|cite_end|>, and medical image segmentation <|cite_start|> (Reference: SurgicalSAM: Efficient Class Promptable Surgical Instrument Segmentation: The Segment Anything Model (SAM) is a powerful foundation model that has revolutionised image segmentation. To apply SAM to surgical instrument segmentation, a common approach is to locate precise points or boxes of instruments and then use them as prompts for SAM in a zero-shot manner. However, we observe two problems with this naive pipeline: (1) the domain gap between natural objects and surgical instruments leads to inferior generalisation of SAM; and (2) SAM relies on precise point or box locations for accurate segmentation, requiring either extensive manual guidance or a well-performing specialist detector for prompt preparation, which leads to a complex multi-stage pipeline. To address these problems, we introduce SurgicalSAM, a novel end-to-end efficient-tuning approach for SAM to effectively integrate surgical-specific information with SAM's pre-trained knowledge for improved generalisation. Specifically, we propose a lightweight prototype-based class prompt encoder for tuning, which directly generates prompt embeddings from class prototypes and eliminates the use of explicit prompts for improved robustness and a simpler pipeline. In addition, to address the low inter-class variance among surgical instrument categories, we propose contrastive prototype learning, further enhancing the discrimination of the class prototypes for more accurate class prompting. The results of extensive experiments on both EndoVis2018 and EndoVis2017 datasets demonstrate that SurgicalSAM achieves state-of-the-art performance while only requiring a small number of tunable parameters. The source code is available at https://github.com/wenxi-yue/SurgicalSAM.) 
<|cite_end|> <|cite_start|> (Reference: Part to whole: Collaborative prompting for surgical instrument segmentation: Foundation models like the Segment Anything Model (SAM) have demonstrated promise in generic object segmentation. However, directly applying SAM to surgical instrument segmentation presents key challenges. First, SAM relies on per-frame point-or-box prompts which complicate surgeon-computer interaction. Also, SAM yields suboptimal performance on segmenting surgical instruments, owing to insufficient surgical data in its pre-training as well as the complex structure and fine-grained details of various surgical instruments. To address these challenges, in this paper, we investigate text promptable surgical instrument segmentation and propose SP-SAM ( S urgical P art-SAM), a novel efficient-tuning approach that integrates surgical instrument structure knowledge with the generic segmentation knowledge of SAM. Specifically, we achieve this by proposing (1) collaborative prompts in the text form “[part name] of [instrument category name]” that decompose instruments into fine-grained parts; (2) a Cross-Modal Prompt Encoder that encodes text prompts jointly with visual embeddings into discriminative part-level representations; and (3) a Part-to-Whole Selective Fusion and a Hierarchical Decoding strategy that selectively assemble the part-level representations into a whole for accurate instrument segmentation. Built upon them,) <|cite_end|>. For example, SAM faces challenges in medical image segmentation due to domain gaps. Some methods <|cite_start|> (Reference: Medical SAM Adapter: Adapting Segment Anything Model for Medical Image Segmentation: The Segment Anything Model (SAM) has recently gained popularity in the field of image segmentation due to its impressive capabilities in various segmentation tasks and its prompt-based interface. However, recent studies and individual experiments have shown that SAM underperforms in medical image segmentation, since the lack of the medical specific knowledge. This raises the question of how to enhance SAM's segmentation capability for medical images. In this paper, instead of fine-tuning the SAM model, we propose the Medical SAM Adapter (Med-SA), which incorporates domain-specific medical knowledge into the segmentation model using a light yet effective adaptation technique. In Med-SA, we propose Space-Depth Transpose (SD-Trans) to adapt 2D SAM to 3D medical images and Hyper-Prompting Adapter (HyP-Adpt) to achieve prompt-conditioned adaptation. We conduct comprehensive evaluation experiments on 17 medical image segmentation tasks across various image modalities. Med-SA outperforms several state-of-the-art (SOTA) medical image segmentation methods, while updating only 2\% of the parameters. Our code is released at https://github.com/KidsWithTokens/Medical-SAM-Adapter.) <|cite_end|> <|cite_start|> (Reference: Customized Segment Anything Model for Medical Image Segmentation: We propose SAMed, a general solution for medical image segmentation. Different from the previous methods, SAMed is built upon the large-scale image segmentation model, Segment Anything Model (SAM), to explore the new research paradigm of customizing large-scale models for medical image segmentation. SAMed applies the low-rank-based (LoRA) finetuning strategy to the SAM image encoder and finetunes it together with the prompt encoder and the mask decoder on labeled medical image segmentation datasets. 
We also observe the warmup finetuning strategy and the AdamW optimizer lead SAMed to successful convergence and lower loss. Different from SAM, SAMed could perform semantic segmentation on medical images. Our trained SAMed model achieves 81.88 DSC and 20.64 HD on the Synapse multi-organ segmentation dataset, which is on par with the state-of-the-art methods. We conduct extensive experiments to validate the effectiveness of our design. Since SAMed only updates a small fraction of the SAM parameters, its deployment cost and storage cost are quite marginal in practical usage. The code of SAMed is available at https://github.com/hitachinsk/SAMed.) <|cite_end|> <|cite_start|> (Reference: SAM-Med2D: The Segment Anything Model (SAM) represents a state-of-the-art research advancement in natural image segmentation, achieving impressive results with input prompts such as points and bounding boxes. However, our evaluation and recent research indicate that directly applying the pretrained SAM to medical image segmentation does not yield satisfactory performance. This limitation primarily arises from significant domain gap between natural images and medical images. To bridge this gap, we introduce SAM-Med2D, the most comprehensive studies on applying SAM to medical 2D images. Specifically, we first collect and curate approximately 4.6M images and 19.7M masks from public and private datasets, constructing a large-scale medical image segmentation dataset encompassing various modalities and objects. Then, we comprehensively fine-tune SAM on this dataset and turn it into SAM-Med2D. Unlike previous methods that only adopt bounding box or point prompts as interactive segmentation approach, we adapt SAM to medical image segmentation through more comprehensive prompts involving bounding boxes, points, and masks. We additionally fine-tune the encoder and decoder of the original SAM to obtain a well-performed SAM-Med2D, leading to the most comprehensive fine-tuning strategies to date. Finally, we conducted a comprehensive evaluation and analysis to investigate the performance of SAM-Med2D in medical image segmentation across various modalities, anatomical structures, and organs. Concurrently, we validated the generalization capability of SAM-Med2D on 9 datasets from MICCAI 2023 challenge. Overall, our approach demonstrated significantly superior performance and generalization capability compared to SAM.) <|cite_end|>leverage adapter tuning on SAM's ViT encoder to better adapt it to the medical imaging domain. PerSAM <|cite_start|> (Reference: Personalize Segment Anything Model with One Shot: Driven by large-data pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful and promptable framework, revolutionizing the segmentation models. Despite the generality, customizing SAM for specific visual concepts without man-powered prompting is under explored, e.g., automatically segmenting your pet dog in different images. In this paper, we propose a training-free Personalization approach for SAM, termed as PerSAM. Given only a single image with a reference mask, PerSAM first localizes the target concept by a location prior, and segments it within other images or videos via three techniques: target-guided attention, target-semantic prompting, and cascaded post-refinement. In this way, we effectively adapt SAM for private use without any training. To further alleviate the mask ambiguity, we present an efficient one-shot fine-tuning variant, PerSAM-F. 
Freezing the entire SAM, we introduce two learnable weights for multi-scale masks, only training 2 parameters within 10 seconds for improved performance. To demonstrate our efficacy, we construct a new segmentation dataset, PerSeg, for personalized evaluation, and test our methods on video object segmentation with competitive performance. Besides, our approach can also enhance DreamBooth to personalize Stable Diffusion for text-to-image generation, which discards the background disturbance for better target appearance learning. Code is released at https://github.com/ZrrSkywalker/Personalize-SAM) <|cite_end|>and Matcher <|cite_start|> (Reference: Matcher: Segment Anything with One Shot Using All-Purpose Feature Matching: Powered by large-scale pre-training, vision foundation models exhibit significant potential in open-world image understanding. However, unlike large language models that excel at directly tackling various language tasks, vision foundation models require a task-specific model structure followed by fine-tuning on specific tasks. In this work, we present Matcher, a novel perception paradigm that utilizes off-the-shelf vision foundation models to address various perception tasks. Matcher can segment anything by using an in-context example without training. Additionally, we design three effective components within the Matcher framework to collaborate with these foundation models and unleash their full potential in diverse perception tasks. Matcher demonstrates impressive generalization performance across various segmentation tasks, all without training. For example, it achieves 52.7% mIoU on COCO-20$^i$ with one example, surpassing the state-of-the-art specialist model by 1.6%. In addition, Matcher achieves 33.0% mIoU on the proposed LVIS-92$^i$ for one-shot semantic segmentation, outperforming the state-of-the-art generalist model by 14.4%. Our visualization results further showcase the open-world generality and flexibility of Matcher when applied to images in the wild. Our code can be found at https://github.com/aim-uofa/Matcher.) <|cite_end|>introduce training-free segmentation frameworks based on SAM with one-shot learning. HQ-SAM <|cite_start|> (Reference: Segment Anything in High Quality: The recent Segment Anything Model (SAM) represents a big leap in scaling up segmentation models, allowing for powerful zero-shot capabilities and flexible prompting. Despite being trained with 1.1 billion masks, SAM's mask prediction quality falls short in many cases, particularly when dealing with objects that have intricate structures. We propose HQ-SAM, equipping SAM with the ability to accurately segment any object, while maintaining SAM's original promptable design, efficiency, and zero-shot generalizability. Our careful design reuses and preserves the pre-trained model weights of SAM, while only introducing minimal additional parameters and computation. We design a learnable High-Quality Output Token, which is injected into SAM's mask decoder and is responsible for predicting the high-quality mask. Instead of only applying it on mask-decoder features, we first fuse them with early and final ViT features for improved mask details. To train our introduced learnable parameters, we compose a dataset of 44K fine-grained masks from several sources. HQ-SAM is only trained on the introduced detaset of 44k masks, which takes only 4 hours on 8 GPUs. 
We show the efficacy of HQ-SAM in a suite of 10 diverse segmentation datasets across different downstream tasks, where 8 out of them are evaluated in a zero-shot transfer protocol. Our code and pretrained models are at https://github.com/SysCV/SAM-HQ.) <|cite_end|>enhances SAM's object segmentation quality while maintaining zero-shot generalizability by applying a learnable high-quality output token on fused features of size $256\times256$ to improve mask details. Despite extensive studies in various domains, there is a notable gap in research on text-centric segmentation. In this work, we introduce the first unified segmentation framework for four text hierarchies. We identify the mask feature size as the primary limitation of applying SAM to fine-grained text segmentation. Our approach employs a simple yet effective method to achieve high-quality text stroke segmentation by providing high-resolution mask features. <|paper_end|>
"<|reference_start|> LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking: Self-supervised pre-training techniques have achieved remarkable progress in Document AI. Most multimodal pre-trained models use a masked language modeling objective to learn bidirectional representations on the text modality, but they differ in pre-training objectives for the image modality. This discrepancy adds difficulty to multimodal representation learning. In this paper, we propose \\textbf{LayoutLMv3} to pre-train multimodal Transformers for Document AI with unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a word-patch alignment objective to learn cross-modal alignment by predicting whether the corresponding image patch of a text word is masked. The simple unified architecture and training objectives make LayoutLMv3 a general-purpose pre-trained model for both text-centric and image-centric Document AI tasks. Experimental results show that LayoutLMv3 achieves state-of-the-art performance not only in text-centric tasks, including form understanding, receipt understanding, and document visual question answering, but also in image-centric tasks such as document image classification and document layout analysis. The code and models are publicly available at \\url{https://aka.ms/layoutlmv3}. <|reference_end|>",
"<|reference_start|> Towards End-to-End Unified Scene Text Detection and Layout Analysis: Scene text detection and document layout analysis have long been treated as two separate tasks in different image domains. In this paper, we bring them together and introduce the task of unified scene text detection and layout analysis. The first hierarchical scene text dataset is introduced to enable this novel research task. We also propose a novel method that is able to simultaneously detect scene text and form text clusters in a unified way. Comprehensive experiments show that our unified model achieves better performance than multiple well-designed baseline methods. Additionally, this model achieves state-of-the-art results on multiple scene text detection datasets without the need of complex post-processing. Dataset and code: https://github.com/google-research-datasets/hiertext and https://github.com/tensorflow/models/tree/master/official/projects/unified_detector. <|reference_end|>",
"<|reference_start|> Segment Anything: We introduce the Segment Anything (SA) project: a new task, model, and dataset for image segmentation. Using our efficient model in a data collection loop, we built the largest segmentation dataset to date (by far), with over 1 billion masks on 11M licensed and privacy respecting images. The model is designed and trained to be promptable, so it can transfer zero-shot to new image distributions and tasks. We evaluate its capabilities on numerous tasks and find that its zero-shot performance is impressive -- often competitive with or even superior to prior fully supervised results. We are releasing the Segment Anything Model (SAM) and corresponding dataset (SA-1B) of 1B masks and 11M images at https://segment-anything.com to foster research into foundation models for computer vision. <|reference_end|>",
"<|reference_start|> Matte Anything: Interactive Natural Image Matting with Segment Anything Models: Natural image matting algorithms aim to predict the transparency map (alpha-matte) with the trimap guidance. However, the production of trimap often requires significant labor, which limits the widespread application of matting algorithms on a large scale. To address the issue, we propose Matte Anything (MatAny), an interactive natural image matting model that could produce high-quality alpha-matte with various simple hints. The key insight of MatAny is to generate pseudo trimap automatically with contour and transparency prediction. In our work, we leverage vision foundation models to enhance the performance of natural image matting. Specifically, we use the segment anything model to predict high-quality contour with user interaction and an open-vocabulary detector to predict the transparency of any object. Subsequently, a pre-trained image matting model generates alpha mattes with pseudo trimaps. MatAny is the interactive matting algorithm with the most supported interaction methods and the best performance to date. It consists of orthogonal vision models without any additional training. We evaluate the performance of MatAny against several current image matting algorithms. MatAny has 58.3% improvement on MSE and 40.6% improvement on SAD compared to the previous image matting methods with simple guidance, achieving new state-of-the-art (SOTA) performance. The source codes and pre-trained models are available at https://github.com/hustvl/Matte-Anything. <|reference_end|>"
] | [
1,
3,
13,
14
] | {"<|multi_cite_1_1|>": "arxiv-306279", "<|multi_cite_1_2|>": "arxiv-490174", "<|multi_cite_1_3|>": "arxiv-564927", "<|cite_2|>": "arxiv-350100", "<|multi_cite_3_1|>": "arxiv-409019", "<|multi_cite_3_2|>": "arxiv-553084", "<|multi_cite_4_1|>": "ss-1532640", "<|multi_cite_4_2|>": "arxiv-306279", "<|multi_cite_4_3|>": "ss-2309447", "<|multi_cite_4_4|>": "ss-934193", "<|multi_cite_4_5|>": "arxiv-490174", "<|multi_cite_4_6|>": "arxiv-235351", "<|multi_cite_4_7|>": "arxiv-418802", "<|multi_cite_4_8|>": "arxiv-432787", "<|multi_cite_4_9|>": "arxiv-463401", "<|cite_5|>": "arxiv-409019", "<|cite_6|>": "ss-1265250", "<|cite_7|>": "ss-959617", "<|cite_8|>": "ss-778091", "<|cite_9|>": "arxiv-138435", "<|cite_10|>": "arxiv-306279", "<|cite_11|>": "ss-766533", "<|cite_12|>": "ss-737840", "<|cite_13|>": "arxiv-340424", "<|cite_14|>": "arxiv-409019", "<|cite_15|>": "arxiv-494904", "<|multi_cite_16_1|>": "arxiv-494904", "<|multi_cite_16_2|>": "arxiv-512145", "<|multi_cite_17_1|>": "arxiv-535385", "<|multi_cite_17_2|>": "arxiv-531722", "<|multi_cite_18_1|>": "arxiv-503958", "<|multi_cite_18_2|>": "arxiv-499380", "<|cite_20|>": "arxiv-409019", "<|cite_21|>": "arxiv-138435", "<|cite_22|>": "arxiv-306279", "<|multi_cite_23_1|>": "ss-2309447", "<|multi_cite_23_2|>": "ss-934194", "<|multi_cite_23_3|>": "ss-934193", "<|cite_24|>": "arxiv-490174", "<|multi_cite_25_1|>": "arxiv-110621", "<|multi_cite_25_2|>": "arxiv-144963", "<|multi_cite_26_1|>": "arxiv-198066", "<|multi_cite_26_2|>": "arxiv-161581", "<|multi_cite_26_3|>": "arxiv-218975", "<|multi_cite_26_4|>": "ss-710425", "<|multi_cite_26_5|>": "arxiv-235351", "<|multi_cite_26_6|>": "arxiv-357379", "<|cite_27|>": "arxiv-235351", "<|multi_cite_28_1|>": "ss-2290033", "<|multi_cite_28_2|>": "arxiv-410904", "<|multi_cite_28_3|>": "arxiv-432787", "<|multi_cite_29_1|>": "arxiv-250026", "<|multi_cite_29_2|>": "arxiv-336120", "<|cite_30|>": "ss-1121575", "<|cite_31|>": "ss-1401110", "<|multi_cite_32_1|>": "arxiv-413828", "<|multi_cite_32_2|>": "arxiv-342844", "<|cite_33|>": "arxiv-409019", "<|multi_cite_34_1|>": "arxiv-416268", "<|multi_cite_34_2|>": "arxiv-404065", "<|multi_cite_34_3|>": "arxiv-484790", "<|multi_cite_34_4|>": "arxiv-532590", "<|cite_35|>": "arxiv-323919", "<|cite_36|>": "arxiv-404065", "<|cite_37|>": "arxiv-484790", "<|cite_38|>": "arxiv-546700", "<|cite_39|>": "arxiv-494904", "<|cite_40|>": "arxiv-494904", "<|cite_41|>": "arxiv-513484", "<|cite_42|>": "ss-746777", "<|multi_cite_43_1|>": "arxiv-503958", "<|multi_cite_43_2|>": "arxiv-499380", "<|multi_cite_44_1|>": "arxiv-531722", "<|multi_cite_44_2|>": "ss-1472361", "<|multi_cite_45_1|>": "arxiv-499692", "<|multi_cite_45_2|>": "arxiv-500224", "<|multi_cite_45_3|>": "arxiv-535385", "<|cite_46|>": "arxiv-502248", "<|cite_47|>": "arxiv-507451", "<|cite_48|>": "arxiv-512145"} |
2406.05568 | <|paper_start|> Title: SAMM: Sharded Automated Market Maker
Abstract: SAMM: Sharded Automated Market Maker: Automated Market Makers (AMMs) are a cornerstone of decentralized finance. They are smart contracts (stateful programs) running on blockchains. They enable virtual token exchange: Traders swap tokens with the AMM for a fee, while liquidity providers supply liquidity and earn these fees. Demand for AMMs is growing rapidly, but our experiment-based estimates show that current architectures cannot meet the projected demand by 2029. This is because the execution of existing AMMs is non-parallelizable. We present SAMM, an AMM comprising multiple shards. All shards are AMMs running on the same chain, but their independence enables parallel execution. Unlike classical sharding solutions, here security relies on incentive compatibility. Therefore, SAMM introduces a novel fee design. Through analysis of Subgame-Perfect Nash Equilibria (SPNE), we show that SAMM incentivizes the desired behavior: Liquidity providers balance liquidity among all shards, overcoming destabilization attacks, and trades are evenly distributed. We validate our game-theoretic analysis with a simulation using real-world data. We evaluate SAMM by implementing and deploying it on local testnets of the Sui and Solana blockchains. To our knowledge, this is the first quantification of ``hot-contract'' performance. SAMM improves throughput by 5x and 16x, respectively, potentially more with better parallelization of the underlying blockchains. It is directly deployable, mitigating the upcoming scaling bottleneck.
Introduction
Decentralized Finance (DeFi) encompasses a variety of financial applications implemented as smart contracts on blockchain platforms.
Its users issue transactions (txs) to generate, loan, and exchange virtual digital tokens.
\emph{Automated Market Makers} (\textit{AMM}s) are a cornerstone of the DeFi ecosystem <|cite_start|> (Reference: Do automated market makers in DeFi ecosystem exhibit time-varying connectedness during stressed events?: We investigate the connectedness of automated market makers (AMM) that play a pivotal role in liquidity and ease of operations in the decentralized exchange (DEX). By applying the TVP-VAR model, our findings show higher level of connectivity during periods of turmoil (such as Delta, Omicron variants of SARS-Covid, and the Russia Ukraine conflict). Furthermore, risk transmission/reception is found to be independent of the platform on which they typically run (Ethereum based AMMs were both emitters as well as receivers). Pancake (a Binance based AMM) and Perpetual Protocol (Ethereum based AMM) emerged as moderate to high receivers of risk transmission, whereas all of the other AMMs, including Ethereum, were found to be risk emitters at varying degrees. We argue that AMMs typically depend on the underlying smart contracts. If the contract is flexible, AMMs can vary (either receiver or emitter), otherwise AMMs behave in tandem.) <|cite_end|> <|cite_start|> (Reference: Finding the Right Curve: Optimal Design of Constant Function Market Makers: Constant Function Market Makers (CFMMs) are a tool for creating exchange markets, have been deployed effectively in prediction markets, and are now especially prominent in the Decentralized Finance ecosystem. We show that for any set of beliefs about future asset prices, an optimal CFMM trading function exists that maximizes the fraction of trades that a CFMM can settle. We formulate a convex program to compute this optimal trading function. This program, therefore, gives a tractable framework for market-makers to compile their belief function on the future prices of the underlying assets into the trading function of a maximally capital-efficient CFMM. Our convex optimization framework further extends to capture the tradeoffs between fee revenue, arbitrage loss, and opportunity costs of liquidity providers. Analyzing the program shows how the consideration of profit and loss leads to a qualitatively different optimal trading function. Our model additionally explains the diversity of CFMM designs that appear in practice. We show that careful analysis of our convex program enables inference of a market-maker's beliefs about future asset prices, and show that these beliefs mirror the folklore intuition for several widely used CFMMs. Developing the program requires a new notion of the liquidity of a CFMM, and the core technical challenge is in the analysis of the KKT conditions of an optimization over an infinite-dimensional Banach space.) <|cite_end|>.
They enable users to immediately exchange between token pairs by maintaining \emph{liquidity pools}: tokens of both types supplied by other users serving as \emph{liquidity providers}.
The demand for AMMs grows rapidly: The prominent Uniswap <|cite_start|> (Reference: Uniswap v2 {Core: This technical whitepaper explains some of the design decisions behind the Uniswap v2 core contracts. It covers the contracts’ new features—including arbitrary pairs between ERC20s, a hardened price oracle that allows other contracts to estimate the time-weighted average price over a given interval, “flash swaps” that allow traders to receive assets and use them elsewhere before paying for them later in the transaction, and a protocol fee that can be turned on in the future. It also re-architects the contracts to reduce their attack surface. This whitepaper describes the mechanics of Uniswap v2’s “core” contracts including the pair contract that stores liquidity providers’ funds—and the factory contract used to instantiate pair contracts.) <|cite_end|> <|cite_start|> (Reference: Uniswap v3 Core: Uniswap v3 is a noncustodial automated market maker implemented for the Ethereum Virtual Machine. In comparison to earlier versions of the protocol, Uniswap v3 provides increased capital efficiency and fine-tuned control to liquidity providers, improves the accuracy and convenience of the price oracle, and has a more flexible fee structure.) <|cite_end|> exchanged \$1~trillion in its first~42 months of operation and an additional~\$1~trillion within only~24 months.
However, AMM throughput (tx per second, \emph{tps}) is limited by the underlying blockchain.
If the current trend continues, demand will surpass $200\textit{tps}$ by 2029 (Appendix~\ref{app:growingdemand}).
Previous work~(\S\ref{sec:related}) all but removed the consensus protocol limitations on throughput (e.g., <|cite_start|> (Reference: Bitcoin-NG: A Scalable Blockchain Protocol: Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential.
This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem.
In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.) <|cite_end|> <|cite_start|> (Reference: Bullshark: DAG BFT Protocols Made Practical: We present Bullshark, the first directed acyclic graph (DAG) based asynchronous Byzantine Atomic Broadcast protocol that is optimized for the common synchronous case. Like previous DAG-based BFT protocols, Bullshark requires no extra communication to achieve consensus on top of building the DAG. That is, parties can totally order the vertices of the DAG by interpreting their local view of the DAG edges. Unlike other asynchronous DAG-based protocols, Bullshark provides a practical low latency fast-path that exploits synchronous periods and deprecates the need for notoriously complex view-change mechanisms. Bullshark achieves this while maintaining all the desired properties of its predecessor DAG-Rider. Namely, it has optimal amortized communication complexity, it provides fairness and asynchronous liveness, and safety is guaranteed even under a quantum adversary. In order to show the practicality and simplicity of our approach, we also introduce a standalone partially synchronous version of Bullshark which we evaluate against the state of the art. The implemented protocol is embarrassingly simple (200 LOC on top of an existing DAG-based mempool implementation (Narwhal & Tusk). It is highly efficient, achieving for example, 125,000 transaction per second with a 2 seconds latency for a deployment of 50 parties. In the same setting the state of the art pays a steep 50% latency increase as it optimizes for asynchrony.) <|cite_end|> <|cite_start|> (Reference: Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus: We propose separating the task of reliable transaction dissemination from transaction ordering, to enable high-performance Byzantine fault-tolerant quorum-based consensus. We design and evaluate a mempool protocol, Narwhal, specializing in high-throughput reliable dissemination and storage of causal histories of transactions. Narwhal tolerates an asynchronous network and maintains high performance despite failures. Narwhal is designed to easily scale-out using multiple workers at each validator, and we demonstrate that there is no foreseeable limit to the throughput we can achieve. Composing Narwhal with a partially synchronous consensus protocol (Narwhal-HotStuff) yields significantly better throughput even in the presence of faults or intermittent loss of liveness due to asynchrony. However, loss of liveness can result in higher latency. To achieve overall good performance when faults occur we design Tusk, a zero-message overhead asynchronous consensus protocol, to work with Narwhal. We demonstrate its high performance under a variety of configurations and faults. As a summary of results, on a WAN, Narwhal-Hotstuff achieves over 130,000 tx/sec at less than 2-sec latency compared with 1,800 tx/sec at 1-sec latency for Hotstuff. Additional workers increase throughput linearly to 600,000 tx/sec without any latency increase. Tusk achieves 160,000 tx/sec with about 3 seconds latency. 
Under faults, both protocols maintain high throughput, but Narwhal-HotStuff suffers from increased latency.) <|cite_end|> <|cite_start|> (Reference: Colordag: An Incentive-Compatible Blockchain: We present Colordag, a blockchain protocol where following the prescribed strategy is, with high probability, a best response as long as all miners have less than 1/2 of the mining power. We prove the correctness of Colordag even if there is an extremely powerful adversary who knows future actions of the scheduler: specifically, when agents will generate blocks and when messages will arrive. The state-of-the-art protocol, Fruitchain, is an epsilon-Nash equilibrium as long as all miners have less than 1/2 of the mining power. However, there is a simple deviation that guarantees that deviators are never worse off than they would be by following Fruitchain, and can sometimes do better. Thus, agents are motivated to deviate. Colordag implements a solution concept that we call epsilon-sure Nash equilibrium and does not suffer from this problem. Because it is an epsilon-sure Nash equilibrium, Colordag is an epsilon Nash equilibrium and with probability (1 - epsilon) is a best response.) <|cite_end|> <|cite_start|> (Reference: A Decentralized Blockchain with High Throughput and Fast Confirmation: This paper presents Conflux, a scalable and decentralized blockchain system with high throughput and fast confirmation. Conflux operates with a novel consensus protocol which optimistically processes concurrent blocks without discarding any as forks and adaptively assigns weights to blocks based on their topologies in the Conflux ledger structure (called Tree-Graph). The adaptive weight mechanism enables Conflux to detect and thwart liveness attack by automatically switching between an optimistic strategy for fast confirmation in normal scenarios and a conservative strategy to ensure consensus progress during liveness attacks. We evaluated Conflux on Amazon EC2 clusters with up to 12k full nodes. The consensus protocol of Conflux achieves a block throughput of 9.6Mbps with 20Mbps network bandwidth limit per node. On a combined workload of payment transactions and Ethereum history transactions, the end-to-end system of Conflux achieves the throughput of up to 3480 transactions per second while confirming transactions under one minute.) <|cite_end|> <|cite_start|> (Reference: Scalable and Probabilistic Leaderless BFT Consensus through Metastability: This paper introduces a family of leaderless Byzantine fault tolerance protocols, built around a metastable mechanism via network subsampling. These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries while their concurrent and leaderless nature enables them to achieve high throughput and scalability. Unlike blockchains that rely on proof-of-work, they are quiescent and green. Unlike traditional consensus protocols where one or more nodes typically process linear bits in the number of total nodes per decision, no node processes more than logarithmic bits. It does not require accurate knowledge of all participants and exposes new possible tradeoffs and improvements in safety and liveness for building consensus protocols. The paper describes the Snow protocol family, analyzes its guarantees, and describes how it can be used to construct the core of an internet-scale electronic payment system called Avalanche, which is evaluated in a large scale deployment. 
Experiments demonstrate that the system can achieve high throughput (3400 tps), provide low confirmation latency (1.35 sec), and scale well compared to existing systems that deliver similar functionality. For our implementation and setup, the bottleneck of the system is in transaction verification.) <|cite_end|>).
Subsequent work addresses execution throughput by employing parallel processing <|cite_start|> (Reference: Adding Concurrency to Smart Contracts: Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics. This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and "discovering" a serializable concurrent schedule for a block's transactions, This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently. Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.) <|cite_end|> <|cite_start|> (Reference: Sui Lutris: A Blockchain Combining Broadcast and Consensus: Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration.) <|cite_end|>.
However, AMMs necessitate sequential handling of transactions since the outcome of each transaction depends on the current state of the AMM and, in turn, alters this state.
Therefore, AMM operations need to be serialized, not executed in parallel.
For the first time (to the best of our knowledge), we show AMM performance does not scale in a state-of-the-art blockchain system, namely Sui <|cite_start|> (Reference: Sui Lutris: A Blockchain Combining Broadcast and Consensus: Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration.) <|cite_end|>, and the throughput is limited by a single CPU core (Figure~\ref{fig:multiple_latency}, ${n=1}$) at $214\textit{tps}$.
Since single-core performance improves slowly <|cite_start|> (Reference: Computing performance: Game over or next level?: The end of dramatic exponential growth in single-processor performance marks the end of the dominance of the single microproessor in computing. The era of sequential computing must give way to an era in which parallelism holds the forefront. Although important scientific and engineering challenges lie ahead, this is an opportune time for innovation in programming systems and computing architectures.) <|cite_end|>, by 2029 even Sui will not be able to satisfy AMM demand.
\begin{figure}[b]
\centering
\includegraphics[width=0.47\textwidth]{figs/latency_tps.png}
\caption{Trade transaction latency as a function of demand with~$n$ SAMM shards.}
\label{fig:multiple_latency}
\end{figure}
In this work, we address the throughput limitation of AMMs by using multiple AMM instances called \emph{shards}.
We model the system~(\S\ref{sec:model}) as a set of AMM shards and rational users of two kinds.
\emph{Traders} purchase tokens; they use the available AMMs and pay fees as required, aiming to minimize their expenses.
\emph{Liquidity providers} deposit tokens into AMMs and earn fees based on their contribution.
The shards are AMMs based on the standard Constant Product Market Maker (CPMM) contract~(\S\ref{sec:preliminaries}).
Roughly, the contract keeps the product of the two token balances constant after each trade.
Thus, purchasing a larger amount of a token increases its unit cost, an effect called \emph{slippage}.
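To make this concrete, the following minimal Python sketch implements the standard constant-product swap rule with a trading fee; the $0.3\%$ fee and the reserve and trade sizes are illustrative assumptions, not SAMM parameters.
\begin{verbatim}
def cpmm_swap(x_reserve, y_reserve, dx, fee_rate=0.003):
    # Constant-product rule: ignoring the fee (which stays in the pool),
    # the reserves satisfy x * y = const, so the output is
    # dy = y - x * y / (x + dx_eff) = y * dx_eff / (x + dx_eff).
    dx_eff = dx * (1 - fee_rate)
    return y_reserve * dx_eff / (x_reserve + dx_eff)

# Slippage: larger trades get a worse unit price.
print(cpmm_swap(1000.0, 1000.0, 10.0) / 10.0)    # ~0.987 Y per X
print(cpmm_swap(1000.0, 1000.0, 200.0) / 200.0)  # ~0.831 Y per X
\end{verbatim}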
We present SAMM~(\S\ref{sec:samm}), an AMM protocol that uses multiple \emph{shards}, all of which are AMM smart contracts operating on the same blockchain.
Ideally, the shards should be \emph{balanced}, i.e., have equal liquidity (deposited amounts), and traders should randomly select a shard to complete each trade.
The model gives rise to a game~(\S\ref{sec:game}) played among the users.
In each step, either a liquidity provider adds liquidity to a subset of the shards, or a trader executes a trade using a subset of the shards.
We assume myopic liquidity provider behavior, reducing the analysis to a Stackelberg game where the liquidity provider adds liquidity to maximize her revenue from a subsequent trade.
We observe that naively using a set of independent CPMMs results in all trades being split among all CPMMs, increasing system overhead without improving throughput.
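The reason is slippage: under the constant-product rule, splitting a given input across two identical pools yields more output than trading it all in one pool, so a cost-minimizing trader splits every trade and each transaction still touches every shard. A small illustrative computation (the reserves and trade size are assumptions for the example):
\begin{verbatim}
def cpmm_out(x, y, dx, fee=0.003):
    dx_eff = dx * (1 - fee)
    return y * dx_eff / (x + dx_eff)

# Two identical pools with 1000/1000 reserves; the trader spends 200 X in total.
single = cpmm_out(1000.0, 1000.0, 200.0)      # whole trade in one pool
split = 2 * cpmm_out(1000.0, 1000.0, 100.0)   # 100 X in each pool
print(single, split)  # ~166.3 vs ~181.3: splitting reduces slippage
\end{verbatim}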
To overcome this, rather than using a fixed fee ratio (as in all previous work we are aware of), we set a range of possible fees.
Within this range, SAMM uses a \emph{trading fee function} that encourages traders to use the smallest shard.
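The exact fee function is defined in \S\ref{sec:samm}. As a purely illustrative stand-in, the sketch below assumes a fee that grows with a shard's relative size (the proportional rule, the $0.3\%$ base fee, and the reserves are assumptions for the example); for a small trade, the extra fee charged by the larger shard then outweighs its lower slippage, making the smallest shard the cheapest venue.
\begin{verbatim}
def swap_out(x, y, dx, fee):
    dx_eff = dx * (1 - fee)
    return y * dx_eff / (x + dx_eff)

# Two shards holding (X, Y) reserves; the second shard is twice as large.
shards = [(1000.0, 1000.0), (2000.0, 2000.0)]
base_fee = 0.003
min_liq = min(x for x, _ in shards)

def fee_of(x_reserve):
    # Illustrative rule only: the fee is proportional to the shard's size
    # relative to the smallest shard.
    return base_fee * x_reserve / min_liq

dx = 1.0
outputs = [swap_out(x, y, dx, fee_of(x)) for x, y in shards]
print(outputs)                      # ~[0.9960, 0.9935]
print(outputs.index(max(outputs)))  # 0: the smallest shard gives the best price
\end{verbatim}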
Our analysis~(\S\ref{sec:gtAnalysis}) shows that, indeed, in all best responses, traders use one of the smallest pools.
This, in turn, implies that leaving the smallest pool unfilled is not a best response for a liquidity provider.
We provide specific strategies for traders and liquidity providers that form a subgame perfect equilibrium.
We also show that, once the system reaches the balanced state, it will stay in that state.
To evaluate SAMM (\S\ref{sec:evaluation}), we implemented the protocol and deployed it to a local test network of the Sui blockchain platform <|cite_start|> (Reference: Sui Lutris: A Blockchain Combining Broadcast and Consensus: Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration.) <|cite_end|>.
SAMM achieves over a fivefold throughput increase compared to a standard single-contract AMM.
Figure~\ref{fig:multiple_latency} shows that with more shards, SAMM achieves higher throughput (X axis) with lower trimmed-mean latency (Y axis).
Error bars show additional experiments; an X marks a failure due to overload.
This increase is limited by the serial elements of Sui's transaction processing, following Amdahl's Law.
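For reference, Amdahl's Law gives the bound behind this observation: if a fraction $s$ of the work per transaction is inherently serial, then spreading the remaining work over $N$ shards yields at most
\[
\mathrm{speedup}(N) \;=\; \frac{1}{s + (1-s)/N} \;\le\; \frac{1}{s},
\]
so the achievable throughput gain saturates at $1/s$ no matter how many shards are added. The serial fraction of Sui's pipeline is not quantified here; the formula only indicates why the gains taper off as shards are added.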
Finally, we confirm the theoretical analysis by simulating trades from real data and observe that (1) traders follow the desired behavior and (2) SAMM significantly improves the liquidity providers' revenue with a minor increase in the traders' costs due to enhanced throughput that allows for more trades.
In summary, our contributions are: (1) Identification of the performance challenges due to ``hot'' contracts, (2) generalization of the trading fee function of AMMs, (3) SAMM: sharded AMM contract with a novel trading fee function to incentivize the desired behavior, (4) game-theoretic analysis showing Subgame-Perfect Nash Equilibrium (SPNE), (5) evaluation in Sui, demonstrating a fivefold increase in throughput (up to the blockchain's limit), and (6)~simulation with real trade data confirming the theoretical analysis.
These results hint at an upcoming challenge~(\S\ref{sec:conclusion}) in smart contract platform design: minimizing the serial elements of transaction processing.
But SAMM can already be employed to scale AMM performance, both when used directly and as a component of DeFi smart contracts.
Related Work
\label{sec:related}
The introduction of the constant product market maker (CPMM) model by Uniswap v1 set a new standard for AMMs, employing a liquidity pool and an algorithm designed to keep the product of the token balances constant.
This approach enabled asset exchanges without relying on traditional order books.
Subsequent iterations, Uniswap v2 <|cite_start|> (Reference: Uniswap v2 {Core: This technical whitepaper explains some of the design decisions behind the Uniswap v2 core contracts. It covers the contracts’ new features—including arbitrary pairs between ERC20s, a hardened price oracle that allows other contracts to estimate the time-weighted average price over a given interval, “flash swaps” that allow traders to receive assets and use them elsewhere before paying for them later in the transaction, and a protocol fee that can be turned on in the future. It also re-architects the contracts to reduce their attack surface. This whitepaper describes the mechanics of Uniswap v2’s “core” contracts including the pair contract that stores liquidity providers’ funds—and the factory contract used to instantiate pair contracts.) <|cite_end|> and v3 <|cite_start|> (Reference: Uniswap v3 Core: Uniswap v3 is a noncustodial automated market maker implemented for the Ethereum Virtual Machine. In comparison to earlier versions of the protocol, Uniswap v3 provides increased capital efficiency and fine-tuned control to liquidity providers, improves the accuracy and convenience of the price oracle, and has a more flexible fee structure.) <|cite_end|>, further developed the CPMM model by improving the price oracle mechanism and the returns for liquidity providers, respectively.
As a result, the CPMM algorithm has become a benchmark, with many AMMs adopting similar trading mechanisms.
Academic research has primarily concentrated on theoretical models, utility optimization, and security issues surrounding AMMs.
Angeris et al. <|cite_start|> (Reference: Improved {{Price Oracles}}: {{Constant Function Market Makers}}: Automated market makers, first popularized by Hanson's logarithmic market scoring rule (or LMSR) for prediction markets, have become important building blocks, called 'primitives,' for decentralized finance. A particularly useful primitive is the ability to measure the price of an asset, a problem often known as the pricing oracle problem. In this paper, we focus on the analysis of a very large class of automated market makers, called constant function market makers (or CFMMs) which includes existing popular market makers such as Uniswap, Balancer, and Curve, whose yearly transaction volume totals to billions of dollars. We give sufficient conditions such that, under fairly general assumptions, agents who interact with these constant function market makers are incentivized to correctly report the price of an asset and that they can do so in a computationally efficient way. We also derive several other useful properties that were previously not known. These include lower bounds on the total value of assets held by CFMMs and lower bounds guaranteeing that no agent can, by any set of trades, drain the reserves of assets held by a given CFMM.) <|cite_end|> expanded the understanding of AMMs by delving into constant function market makers (CFMMs), demonstrating their utility as decentralized price oracles and broadening the CPMM model's application.
Following this, research has increasingly focused on trading utility maximization <|cite_start|> (Reference: Optimal Routing for Constant Function Market Makers: We consider the problem of optimally executing an order involving multiple crypto-assets, sometimes called tokens, on a network of multiple constant function market makers (CFMMs). When we ignore the fixed cost associated with executing an order on a CFMM, this optimal routing problem can be cast as a convex optimization problem, which is computationally tractable. When we include the fixed costs, the optimal routing problem is a mixed-integer convex problem, which can be solved using (sometimes slow) global optimization methods, or approximately solved using various heuristics based on convex optimization. The optimal routing problem includes as a special case the problem of identifying an arbitrage present in a network of CFMMs, or certifying that none exists.) <|cite_end|>, advanced arbitrage techniques <|cite_start|> (Reference: High-Frequency Trading on Decentralized On-Chain Exchanges: Decentralized exchanges (DEXs) allow parties to participate in financial markets while retaining full custody of their funds. However, the transparency of blockchain-based DEX in combination with the latency for transactions to be processed, makes market-manipulation feasible. For instance, adversaries could perform front-running -- the practice of exploiting (typically non-public) information that may change the price of an asset for financial gain. In this work we formalize, analytically exposit and empirically evaluate an augmented variant of front-running: sandwich attacks, which involve front- and back-running victim transactions on a blockchain-based DEX. We quantify the probability of an adversarial trader being able to undertake the attack, based on the relative positioning of a transaction within a blockchain block. We find that a single adversarial trader can earn a daily revenue of over several thousand USD when performing sandwich attacks on one particular DEX -- Uniswap, an exchange with over 5M USD daily trading volume by June 2020. In addition to a single-adversary game, we simulate the outcome of sandwich attacks under multiple competing adversaries, to account for the real-world trading environment.) <|cite_end|> <|cite_start|> (Reference: Routing MEV in Constant Function Market Makers: ) <|cite_end|> <|cite_start|> (Reference: Maximizing Extractable Value from Automated Market Makers: Automated Market Makers (AMMs) are decentralized applications that allow users to exchange crypto-tokens without the need for a matching exchange order. AMMs are one of the most successful DeFi use cases: indeed, major AMM platforms process a daily volume of transactions worth USD billions. Despite their popularity, AMMs are well-known to suffer from transaction-ordering issues: adversaries can influence the ordering of user transactions, and possibly front-run them with their own, to extract value from AMMs, to the detriment of users. We devise an effective procedure to construct a strategy through which an adversary can maximize the value extracted from user transactions.) <|cite_end|> <|cite_start|> (Reference: n-MVTL Attack: Optimal Transaction Reordering Attack on DeFi: ) <|cite_end|>, improving liquidity providers' returns <|cite_start|> (Reference: {Automated Market Making and Loss-Versus-Rebalancing: We consider the market microstructure of automated market makers (AMMs) from the perspective of liquidity providers (LPs). 
Our central contribution is a ``Black-Scholes formula for AMMs''. We identify the main adverse selection cost incurred by LPs, which we call ``loss-versus-rebalancing'' (LVR, pronounced ``lever''). LVR captures costs incurred by AMM LPs due to stale prices that are picked off by better informed arbitrageurs. We derive closed-form expressions for LVR applicable to all automated market makers. Our model is quantitatively realistic, matching actual LP returns empirically, and shows how CFMM protocols can be redesigned to reduce or eliminate LVR.) <|cite_end|> <|cite_start|> (Reference: Finding the Right Curve: Optimal Design of Constant Function Market Makers: Constant Function Market Makers (CFMMs) are a tool for creating exchange markets, have been deployed effectively in prediction markets, and are now especially prominent in the Decentralized Finance ecosystem. We show that for any set of beliefs about future asset prices, an optimal CFMM trading function exists that maximizes the fraction of trades that a CFMM can settle. We formulate a convex program to compute this optimal trading function. This program, therefore, gives a tractable framework for market-makers to compile their belief function on the future prices of the underlying assets into the trading function of a maximally capital-efficient CFMM. Our convex optimization framework further extends to capture the tradeoffs between fee revenue, arbitrage loss, and opportunity costs of liquidity providers. Analyzing the program shows how the consideration of profit and loss leads to a qualitatively different optimal trading function. Our model additionally explains the diversity of CFMM designs that appear in practice. We show that careful analysis of our convex program enables inference of a market-maker's beliefs about future asset prices, and show that these beliefs mirror the folklore intuition for several widely used CFMMs. Developing the program requires a new notion of the liquidity of a CFMM, and the core technical challenge is in the analysis of the KKT conditions of an optimization over an infinite-dimensional Banach space.) <|cite_end|>, ensuring transaction privacy <|cite_start|> (Reference: Differential Privacy in Constant Function Market Makers: ) <|cite_end|>, eliminating Miner Extractable Value (MEV) for fair trades <|cite_start|> (Reference: Mechanism Design for Automated Market Makers: Blockchains have popularized automated market makers (AMMs). An AMM exchange is an application running on a blockchain which maintains a pool of crypto-assets and automatically trades assets with users governed by some pricing function that prices the assets based on their relative demand/supply. AMMs have created an important challenge commonly known as the Miner Extractable Value (MEV). In particular, the miners who control the contents and ordering of transactions in a block can extract value by front-running and back-running users' transactions, leading to arbitrage opportunities that guarantee them risk-free returns. In this paper, we consider how to design AMM mechanisms that eliminate MEV opportunities. Specifically, we propose a new AMM mechanism that processes all transactions contained within a block in a batch. We show that our new mechanism satisfies two tiers of guarantees. First, for legacy blockchains where each block is proposed by a single (possibly rotating) miner, we prove that our mechanism satisfies arbitrage resilience, i.e., a miner cannot gain risk-free profit. 
Moreover, we also guarantee fair treatment among all transactions within the same block, such that the miner is unable to sell off favorable positions in the block to users or arbitragers. Second, for blockchains where the block proposal process is decentralized and offers sequencing-fairness, we prove a stronger notion called incentive compatibility -- roughly speaking, we guarantee that any individual user's best response is to follow the honest strategy.) <|cite_end|> <|cite_start|> (Reference: Credible Decentralized Exchange Design via Verifiable Sequencing Rules: Trading on decentralized exchanges has been one of the primary use cases for permissionless blockchains with daily trading volume exceeding billions of U.S.~dollars. In the status quo, users broadcast transactions and miners are responsible for composing a block of transactions and picking an execution ordering -- the order in which transactions execute in the exchange. Due to the lack of a regulatory framework, it is common to observe miners exploiting their privileged position by front-running transactions and obtaining risk-fee profits. In this work, we propose to modify the interaction between miners and users and initiate the study of {\em verifiable sequencing rules}. As in the status quo, miners can determine the content of a block; however, they commit to respecting a sequencing rule that constrains the execution ordering and is verifiable (there is a polynomial time algorithm that can verify if the execution ordering satisfies such constraints). Thus in the event a miner deviates from the sequencing rule, anyone can generate a proof of non-compliance. We ask if there are sequencing rules that limit price manipulation from miners in a two-token liquidity pool exchange. Our first result is an impossibility theorem: for any sequencing rule, there is an instance of user transactions where the miner can obtain non-zero risk-free profits. In light of this impossibility result, our main result is a verifiable sequencing rule that provides execution price guarantees for users. In particular, for any user transaction A, it ensures that either (1) the execution price of A is at least as good as if A was the only transaction in the block, or (2) the execution price of A is worse than this ``standalone'' price and the miner does not gain (or lose) when including A in the block.) <|cite_end|> <|cite_start|> (Reference: Batching Trades on Automated Market Makers: We consider an automated market maker (AMM) in which all trades are batched and executed at a price equal to the marginal price (i) <|cite_end|>, and examining the synergy between blockchain-based AMMs and prediction market mechanisms <|cite_start|> (Reference: Axioms for Constant Function Market Makers: We study axiomatic foundations for different classes of constant-function automated market makers (CFMMs). We focus particularly on separability and on different invariance properties under scaling. Our main results are an axiomatic characterization of a natural generalization of constant product market makers (CPMMs), popular in decentralized finance, on the one hand, and a characterization of the Logarithmic Scoring Rule Market Makers (LMSR), popular in prediction markets, on the other hand. The first class is characterized by the combination of independence and scale invariance, whereas the second is characterized by the combination of independence and translation invariance. 
The two classes are therefore distinguished by a different invariance property that is motivated by different interpretations of the num\'eraire in the two applications. However, both are pinned down by the same separability property. Moreover, we characterize the CPMM as an extremal point within the class of scale invariant, independent, symmetric AMMs with non-concentrated liquidity provision. Our results add to a formal analysis of mechanisms that are currently used for decentralized exchanges and connect the most popular class of DeFi AMMs to the most popular class of prediction market AMMs.) <|cite_end|>.
To the best of our knowledge, previous work did not address AMM throughput scaling.
Like other smart contracts, AMMs' throughput is limited by the blockchain's constraints.
AMM contracts can be deployed on so-called layer-2 solutions <|cite_start|> (Reference: Layer 2 be or Layer not 2 be: Scaling on Uniswap v3: This paper studies the market structure impact of cheaper and faster chains on the Uniswap v3 Protocol. The Uniswap Protocol is the largest decentralized application on Ethereum by both gas and blockspace used, and user behaviors of the protocol are very sensitive to fluctuations in gas prices and market structure due to the economic factors of the Protocol. We focus on the chains where Uniswap v3 has the most activity, giving us the best comparison to Ethereum mainnet. Because of cheaper gas and lower block times, we find evidence that the majority of swaps get better gas-adjusted execution on these chains, liquidity providers are more capital efficient, and liquidity providers have increased fee returns from more arbitrage. We also present evidence that two second block times may be too long for optimal liquidity provider returns, compared to first come, first served. We argue that many of the current drawbacks with AMMs may be due to chain dynamics and are vastly improved with cheaper and faster transactions) <|cite_end|> (e.g., ZkSwap and QuickSwap), but this merely creates a separate environment for AMM contracts, with scaling issues persisting within this realm.
Thus, the efficiency of AMM contracts largely depends on improvements in the underlying blockchain.
The first generation of blockchains, starting from Bitcoin <|cite_start|> (Reference: Bitcoin: A Peer-to-Peer electronic cash system: [Original author: Satoshi Nakamoto; translation exclusively sponsored by Bitcoinblogger.com; author email: [email protected]; www.bitcoin.org] Abstract: A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through any financial institution. Digital signatures provide part of the solution, but the main benefits are lost if a trusted third party is still required to prevent double-spending. We propose a solution that lets the cash system operate in a peer-to-peer setting and prevents the double-spending problem. The network timestamps transactions by hashing them into an ongoing chain of hash-based proof-of-work, forming a record that cannot be changed without redoing all of the proof-of-work. The longest chain not only serves as proof of the sequence of events witnessed, but is also regarded as coming from the largest pool of CPU power. As long as the majority of CPU power is not cooperating to attack the network, honest nodes will generate the longest chain and outpace attackers. The system itself requires very little infrastructure. Messages are broadcast on a best-effort basis, and nodes can leave and rejoin the network at will, accepting the longest proof-of-work chain as proof of the transactions that occurred while they were offline.) <|cite_end|>, suffered from throughput limitations due to their consensus protocols.
However, a body of work overcame this limitation using a variety of protocols <|cite_start|> (Reference: Bitcoin-NG: A Scalable Blockchain Protocol: Cryptocurrencies, based on and led by Bitcoin, have shown promise as infrastructure for pseudonymous online payments, cheap remittance, trustless digital asset exchange, and smart contracts. However, Bitcoin-derived blockchain protocols have inherent scalability limits that trade-off between throughput and latency and withhold the realization of this potential.
This paper presents Bitcoin-NG, a new blockchain protocol designed to scale. Based on Bitcoin's blockchain protocol, Bitcoin-NG is Byzantine fault tolerant, is robust to extreme churn, and shares the same trust model obviating qualitative changes to the ecosystem.
In addition to Bitcoin-NG, we introduce several novel metrics of interest in quantifying the security and efficiency of Bitcoin-like blockchain protocols. We implement Bitcoin-NG and perform large-scale experiments at 15% the size of the operational Bitcoin system, using unchanged clients of both protocols. These experiments demonstrate that Bitcoin-NG scales optimally, with bandwidth limited only by the capacity of the individual nodes and latency limited only by the propagation time of the network.) <|cite_end|> <|cite_start|> (Reference: NC-Max: Breaking the Security-Performance Tradeoff in Nakamoto Consensus: —First implemented in Bitcoin, Nakamoto Consensus (NC) is the most influential consensus protocol in cryptocurrencies despite all the alternative protocols designed afterward. Nevertheless, NC is trapped by a security-performance tradeoff. While existing efforts mostly attempt to break this tradeoff via abandoning or adjusting NC’s backbone protocol, we alternatively forward the relevance of the network layer. We identify and experimentally prove that the crux resides with the pro-longed block propagation latency caused by not-yet-propagated transactions. We thus present a two-step mechanism to confirm only fully-propagated transactions, and therefore remove the limits upon NC’s performance imposed by its security demands, realizing NC’s untapped potential. Implementing this two-step mechanism, we propose NC-Max, whose (1) security is analyzed, proving that it provides stronger resistance than NC against transaction withholding attacks, and (2) performance is evaluated, showing that it exhausts the full throughput supported by the network, and shortens the transaction confirmation latency by 3.0 to 6.6 times compared to NC without compromising security. NC-Max is implemented in Nervos CKB, a public permissionless blockchain.) <|cite_end|> <|cite_start|> (Reference: Solana : A new architecture for a high performance blockchain v 0 . 8: This paper proposes a new blockchain architecture based on Proof of History (PoH) a proof for verifying order and passage of time between events. PoH is used to encode trustless passage of time into a ledger an append only data structure. When used alongside a consensus algorithm such as Proof of Work (PoW) or Proof of Stake (PoS), PoH can reduce messaging overhead in a Byzantine Fault Tolerant replicated state machine, resulting in sub-second finality times. This paper also proposes two algorithms that leverage the time keeping properties of the PoH ledger a PoS algorithm that can recover from partitions of any size and an efficient streaming Proof of Replication (PoRep). The combination of PoRep and PoH provides a defense against forgery of the ledger with respect to time (ordering) and storage. The protocol is analyzed on a 1 gbps network, and this paper shows that throughput up to 710k transactions per second is possible with today’s hardware.) <|cite_end|> <|cite_start|> (Reference: Bullshark: DAG BFT Protocols Made Practical: We present Bullshark, the first directed acyclic graph (DAG) based asynchronous Byzantine Atomic Broadcast protocol that is optimized for the common synchronous case. Like previous DAG-based BFT protocols, Bullshark requires no extra communication to achieve consensus on top of building the DAG. That is, parties can totally order the vertices of the DAG by interpreting their local view of the DAG edges. 
Unlike other asynchronous DAG-based protocols, Bullshark provides a practical low latency fast-path that exploits synchronous periods and deprecates the need for notoriously complex view-change mechanisms. Bullshark achieves this while maintaining all the desired properties of its predecessor DAG-Rider. Namely, it has optimal amortized communication complexity, it provides fairness and asynchronous liveness, and safety is guaranteed even under a quantum adversary. In order to show the practicality and simplicity of our approach, we also introduce a standalone partially synchronous version of Bullshark which we evaluate against the state of the art. The implemented protocol is embarrassingly simple (200 LOC on top of an existing DAG-based mempool implementation (Narwhal & Tusk). It is highly efficient, achieving for example, 125,000 transaction per second with a 2 seconds latency for a deployment of 50 parties. In the same setting the state of the art pays a steep 50% latency increase as it optimizes for asynchrony.) <|cite_end|> <|cite_start|> (Reference: Narwhal and Tusk: A DAG-based Mempool and Efficient BFT Consensus: We propose separating the task of reliable transaction dissemination from transaction ordering, to enable high-performance Byzantine fault-tolerant quorum-based consensus. We design and evaluate a mempool protocol, Narwhal, specializing in high-throughput reliable dissemination and storage of causal histories of transactions. Narwhal tolerates an asynchronous network and maintains high performance despite failures. Narwhal is designed to easily scale-out using multiple workers at each validator, and we demonstrate that there is no foreseeable limit to the throughput we can achieve. Composing Narwhal with a partially synchronous consensus protocol (Narwhal-HotStuff) yields significantly better throughput even in the presence of faults or intermittent loss of liveness due to asynchrony. However, loss of liveness can result in higher latency. To achieve overall good performance when faults occur we design Tusk, a zero-message overhead asynchronous consensus protocol, to work with Narwhal. We demonstrate its high performance under a variety of configurations and faults. As a summary of results, on a WAN, Narwhal-Hotstuff achieves over 130,000 tx/sec at less than 2-sec latency compared with 1,800 tx/sec at 1-sec latency for Hotstuff. Additional workers increase throughput linearly to 600,000 tx/sec without any latency increase. Tusk achieves 160,000 tx/sec with about 3 seconds latency. Under faults, both protocols maintain high throughput, but Narwhal-HotStuff suffers from increased latency.) <|cite_end|> and data structures <|cite_start|> (Reference: A Decentralized Blockchain with High Throughput and Fast Confirmation: This paper presents Conflux, a scalable and decentralized blockchain system with high throughput and fast confirmation. Conflux operates with a novel consensus protocol which optimistically processes concurrent blocks without discarding any as forks and adaptively assigns weights to blocks based on their topologies in the Conflux ledger structure (called Tree-Graph). The adaptive weight mechanism enables Conflux to detect and thwart liveness attack by automatically switching between an optimistic strategy for fast confirmation in normal scenarios and a conservative strategy to ensure consensus progress during liveness attacks. We evaluated Conflux on Amazon EC2 clusters with up to 12k full nodes. 
The consensus protocol of Conflux achieves a block throughput of 9.6Mbps with 20Mbps network bandwidth limit per node. On a combined workload of payment transactions and Ethereum history transactions, the end-to-end system of Conflux achieves the throughput of up to 3480 transactions per second while confirming transactions under one minute.) <|cite_end|> <|cite_start|> (Reference: Scalable and Probabilistic Leaderless BFT Consensus through Metastability: This paper introduces a family of leaderless Byzantine fault tolerance protocols, built around a metastable mechanism via network subsampling. These protocols provide a strong probabilistic safety guarantee in the presence of Byzantine adversaries while their concurrent and leaderless nature enables them to achieve high throughput and scalability. Unlike blockchains that rely on proof-of-work, they are quiescent and green. Unlike traditional consensus protocols where one or more nodes typically process linear bits in the number of total nodes per decision, no node processes more than logarithmic bits. It does not require accurate knowledge of all participants and exposes new possible tradeoffs and improvements in safety and liveness for building consensus protocols. The paper describes the Snow protocol family, analyzes its guarantees, and describes how it can be used to construct the core of an internet-scale electronic payment system called Avalanche, which is evaluated in a large scale deployment. Experiments demonstrate that the system can achieve high throughput (3400 tps), provide low confirmation latency (1.35 sec), and scale well compared to existing systems that deliver similar functionality. For our implementation and setup, the bottleneck of the system is in transaction verification.) <|cite_end|>.
With consensus constraints out of the way, the serial execution of blockchain transactions became the bottleneck.
Several works propose blockchain sharding <|cite_start|> (Reference: RapidChain: Scaling blockchain via full sharding: A major approach to overcoming the performance and scalability limitations of current blockchain protocols is to use sharding which is to split the overheads of processing transactions among multiple, smaller groups of nodes. These groups work in parallel to maximize performance while requiring significantly smaller communication, computation, and storage per node, allowing the system to scale to large networks. However, existing sharding-based blockchain protocols still require a linear amount of communication (in the number of participants) per transaction, and hence, attain only partially the potential benefits of sharding. We show that this introduces a major bottleneck to the throughput and latency of these protocols. Aside from the limited scalability, these protocols achieve weak security guarantees due to either a small fault resiliency (e.g., 1/8 and 1/4) or high failure probability, or they rely on strong assumptions (e.g., trusted setup) that limit their applicability to mainstream payment systems. We propose RapidChain, the first sharding-based public blockchain protocol that is resilient to Byzantine faults from up to a 1/3 fraction of its participants, and achieves complete sharding of the communication, computation, and storage overhead of processing transactions without assuming any trusted setup. RapidChain employs an optimal intra-committee consensus algorithm that can achieve very high throughputs via block pipelining, a novel gossiping protocol for large blocks, and a provably-secure reconfiguration mechanism to ensure robustness. Using an efficient cross-shard transaction verification technique, our protocol avoids gossiping transactions to the entire network. Our empirical evaluations suggest that RapidChain can process (and confirm) more than 7,300 tx/sec with an expected confirmation latency of roughly 8.7 seconds in a network of 4,000 nodes with an overwhelming time-to-failure of more than 4,500 years.) <|cite_end|> <|cite_start|> (Reference: OmniLedger: A secure, scale-out, decentralized ledger via sharding: Designing a secure permissionless distributed ledger (blockchain) that performs on par with centralized payment processors, such as Visa, is a challenging task. Most existing distributed ledgers are unable to scale-out, i.e., to grow their total processing capacity with the number of validators; and those that do, compromise security or decentralization. We present OmniLedger, a novel scale-out distributed ledger that preserves longterm security under permissionless operation. It ensures security and correctness by using a bias-resistant public-randomness protocol for choosing large, statistically representative shards that process transactions, and by introducing an efficient cross-shard commit protocol that atomically handles transactions affecting multiple shards. OmniLedger also optimizes performance via parallel intra-shard transaction processing, ledger pruning via collectively-signed state blocks, and low-latency "trust-but-verify" validation for low-value transactions. An evaluation of our experimental prototype shows that OmniLedger’s throughput scales linearly in the number of active validators, supporting Visa-level workloads and beyond, while confirming typical transactions in under two seconds.) 
<|cite_end|> <|cite_start|> (Reference: Monoxide: Scale out Blockchains with Asynchronous Consensus Zones: Cryptocurrencies have provided a promising infrastructure for pseudonymous online payments. However, low throughput has significantly hindered the scalability and usability of cryptocurrency systems for increasing numbers of users and transactions. Another obstacle to achieving scalability is the requirement for every node to duplicate the communication, storage, and state representation of the entire network. In this paper, we introduce the Asynchronous Consensus Zones, which scales blockchain system linearly without compromising decentralization or security. We achieve this by running multiple independent and parallel instances of single-chain consensus systems termed as zones. The consensus happens independently within each zone with minimized communication, which partitions the workload of the entire network and ensures a moderate burden for each individual node as the network grows. We propose eventual atomicity to ensure transaction atomicity across zones, which achieves the efficient completion of transactions without the overhead of a two-phase commit protocol. Additionally, we propose Chu-ko-nu mining to ensure the effective mining power in each zone to be at the same level of the entire network, making an attack on any individual zone as hard as that on the full network. Our experimental results show the effectiveness of our work: on a testbed including 1,200 virtual machines worldwide to support 48,000 nodes, our system delivers 1,000× throughput and 2,000× capacity over the Bitcoin and Ethereum networks.) <|cite_end|>, i.e., dividing the blockchain into smaller, interconnected chains (shards) allowing parallelism.
In a similar vein, \emph{layer-2 protocols} <|cite_start|> (Reference: SoK: Layer-Two Blockchain Protocols: ) <|cite_end|> outsource computation to a secondary protocol secured by the main blockchain.
However, both sharding and layer-2 solutions only parallelize independent contracts.
An AMM therefore does not benefit from blockchain sharding or layer-2 solutions, as it must reside on a single chain and be processed sequentially.
Note that in SAMM we use multiple AMM contracts, which can be run on a single-shard blockchain, or in separate shards of a sharded blockchain.
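A toy read/write-set view (a simplification, not Sui's actual object model or scheduler) shows why: every trade against a single pool object conflicts with every other trade and must execute serially, whereas trades against distinct per-shard pool objects are mutually independent and can run in parallel. The object names below are purely illustrative.
\begin{verbatim}
from itertools import combinations

def conflict(a, b):
    """Two transactions conflict if one writes an object the other reads or writes."""
    return bool(a["writes"] & (b["reads"] | b["writes"])
                or b["writes"] & (a["reads"] | a["writes"]))

# Every swap on a single AMM touches the same pool object -> all pairs conflict.
single_pool = [{"reads": {"pool"}, "writes": {"pool"}} for _ in range(3)]
# With one pool object per shard, swaps on different shards are independent.
sharded = [{"reads": {f"pool_{i}"}, "writes": {f"pool_{i}"}} for i in range(3)]

print(any(conflict(a, b) for a, b in combinations(single_pool, 2)))  # True  -> serial
print(any(conflict(a, b) for a, b in combinations(sharded, 2)))      # False -> parallel
\end{verbatim}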
An alternative approach identifies read and write set conflicts and parallelizes non-conflicting smart contract transaction execution <|cite_start|> (Reference: A Concurrent Perspective on Smart Contracts: In this paper, we explore remarkable similarities between multi-transactional behaviors of smart contracts in cryptocurrencies such as Ethereum and classical problems of shared-memory concurrency. We examine two real-world examples from the Ethereum blockchain and analyzing how they are vulnerable to bugs that are closely reminiscent to those that often occur in traditional concurrent programs. We then elaborate on the relation between observable contract behaviors and well-studied concurrency topics, such as atomicity, interference, synchronization, and resource ownership. The described contracts-as-concurrent-objects analogy provides deeper understanding of potential threats for smart contracts, indicate better engineering practices, and enable applications of existing state-of-the-art formal verification techniques.) <|cite_end|> <|cite_start|> (Reference: Adding Concurrency to Smart Contracts: Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics. This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and "discovering" a serializable concurrent schedule for a block's transactions, This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently. Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.) <|cite_end|> <|cite_start|> (Reference: Parallel and Asynchronous Smart Contract Execution: Today's blockchains suffer from low throughput and high latency, which impedes their widespread adoption of more complex applications like smart contracts. In this paper, we propose a novel paradigm for smart contract execution. It distinguishes between consensus nodes and execution nodes: different groups of execution nodes can execute transactions in parallel; meanwhile, consensus nodes can asynchronously order transactions and process execution results. Moreover, it requires no coordination among execution nodes and can effectively prevent livelocks. We show two ways of applying this paradigm to blockchains. First, we show how we can make Ethereum support parallel and asynchronous contract execution \emph{without hard-forks}. Then, we propose a new public, permissionless blockchain. 
Our benchmark shows that, with a fast consensus layer, it can provide a high throughput even for complex transactions like Cryptokitties gene mixing. It can also protect simple transactions from being starved by complex transactions.) <|cite_end|> <|cite_start|> (Reference: Practical smart contract sharding with ownership and commutativity analysis: Sharding is a popular way to achieve scalability in blockchain protocols, increasing their throughput by partitioning the set of transaction validators into a number of smaller committees, splitting the workload. Existing approaches for blockchain sharding, however, do not scale well when concurrent transactions alter the same replicated state component—a common scenario in Ethereum-style smart contracts. We propose a novel approach for efficiently sharding such transactions. It is based on a folklore idea: state-manipulating atomic operations that commute can be processed in parallel, with their cumulative result defined deterministically, while executing non-commuting operations requires one to own the state they alter. We present CoSplit—a static program analysis tool that soundly infers ownership and commutativity summaries for smart contracts and translates those summaries to sharding signatures that are used by the blockchain protocol to maximise parallelism. Our evaluation shows that using CoSplit introduces negligible overhead to the transaction validation cost, while the inferred signatures allow the system to achieve a significant increase in transaction processing throughput for real-world smart contracts.) <|cite_end|> <|cite_start|> (Reference: Utilizing Parallelism in Smart Contracts on Decentralized Blockchains by Taming Application-Inherent Conflicts: Traditional public blockchain systems typically had very limited transaction throughput because of the bottleneck of the consensus protocol itself. With recent advances in consensus technology, the performance limit has been greatly lifted, typically to thousands of transactions per second. With this, transaction execution has become a new performance bottleneck. Exploiting parallelism in transaction execution is a clear and direct way to address this and to further increase transaction throughput. Although some recent literature introduced concurrency control mechanisms to execute smart contract transactions in parallel, the reported speedup that they can achieve is far from ideal. The main reason is that the proposed parallel execution mechanisms cannot effectively deal with the conflicts inherent in many blockchain applications. In this work, we thoroughly study the historical transaction execution traces in Ethereum. We observe that application-inherent conflicts are the major factors that limit the exploitable parallelism during execution. We propose to use partitioned counters and special commutative instructions to break up the application conflict chains in order to maximize the potential speedup. When we evaluated the maximum parallel speedup achievable, these techniques doubled this limit to an 18x overall speedup compared to serial execution, thus approaching the optimum. We also propose OCC-DA, an optimistic concurrency control scheduler with deterministic aborts, which makes it possible to use OCC scheduling in public blockchain settings.) <|cite_end|> <|cite_start|> (Reference: Sui Lutris: A Blockchain Combining Broadcast and Consensus: Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. 
It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration.) <|cite_end|>.
However, AMM transactions must be processed sequentially and do not benefit from this approach either. <|paper_end|> | [
"<|reference_start|> Adding Concurrency to Smart Contracts: Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics. This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and \"discovering\" a serializable concurrent schedule for a block's transactions, This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently. Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads. <|reference_end|>",
"<|reference_start|> Optimal Routing for Constant Function Market Makers: We consider the problem of optimally executing an order involving multiple crypto-assets, sometimes called tokens, on a network of multiple constant function market makers (CFMMs). When we ignore the fixed cost associated with executing an order on a CFMM, this optimal routing problem can be cast as a convex optimization problem, which is computationally tractable. When we include the fixed costs, the optimal routing problem is a mixed-integer convex problem, which can be solved using (sometimes slow) global optimization methods, or approximately solved using various heuristics based on convex optimization. The optimal routing problem includes as a special case the problem of identifying an arbitrage present in a network of CFMMs, or certifying that none exists. <|reference_end|>",
"<|reference_start|> Routing MEV in Constant Function Market Makers: <|reference_end|>",
"<|reference_start|> Sui Lutris: A Blockchain Combining Broadcast and Consensus: Sui Lutris is the first smart-contract platform to sustainably achieve sub-second finality. It achieves this significant decrease by employing consensusless agreement not only for simple payments but for a large variety of transactions. Unlike prior work, Sui Lutris neither compromises expressiveness nor throughput and can run perpetually without restarts. Sui Lutris achieves this by safely integrating consensuless agreement with a high-throughput consensus protocol that is invoked out of the critical finality path but ensures that when a transaction is at risk of inconsistent concurrent accesses, its settlement is delayed until the total ordering is resolved. Building such a hybrid architecture is especially delicate during reconfiguration events, where the system needs to preserve the safety of the consensusless path without compromising the long-term liveness of potentially misconfigured clients. We thus develop a novel reconfiguration protocol, the first to provably show the safe and efficient reconfiguration of a consensusless blockchain. Sui Lutris is currently running in production and underpins the Sui smart-contract platform. Combined with the use of Objects instead of accounts it enables the safe execution of smart contracts that expose objects as a first-class resource. In our experiments Sui Lutris achieves latency lower than 0.5 seconds for throughput up to 5,000 certificates per second (150k ops/s with transaction blocks), compared to the state-of-the-art real-world consensus latencies of 3 seconds. Furthermore, it gracefully handles validators crash-recovery and does not suffer visible performance degradation during reconfiguration. <|reference_end|>"
] | [
10,
18,
20,
48
] | {"<|multi_cite_1_1|>": "ss-1353654", "<|multi_cite_1_2|>": "arxiv-467857", "<|multi_cite_2_2|>": "ss-1248685", "<|multi_cite_2_3|>": "ss-727453", "<|multi_cite_4_1|>": "ss-722251", "<|multi_cite_4_2|>": "arxiv-392715", "<|multi_cite_4_3|>": "arxiv-343100", "<|multi_cite_4_4|>": "arxiv-533080", "<|multi_cite_4_5|>": "ss-2417214", "<|multi_cite_4_6|>": "arxiv-210813", "<|multi_cite_5_1|>": "arxiv-116595", "<|multi_cite_5_2|>": "arxiv-553250", "<|cite_6|>": "arxiv-553250", "<|cite_7|>": "ss-1214886", "<|cite_8|>": "arxiv-553250", "<|cite_10|>": "ss-1248685", "<|cite_11|>": "ss-727453", "<|cite_13|>": "ss-1283300", "<|cite_14|>": "ss-1990728", "<|multi_cite_15_1|>": "arxiv-292736", "<|multi_cite_15_2|>": "ss-1353655", "<|multi_cite_15_3|>": "arxiv-345471", "<|multi_cite_15_4|>": "ss-1353656", "<|multi_cite_16_1|>": "ss-1859536", "<|multi_cite_16_2|>": "arxiv-467857", "<|cite_17|>": "ss-762315", "<|multi_cite_18_1|>": "arxiv-585453", "<|multi_cite_18_2|>": "arxiv-450150", "<|multi_cite_18_3|>": "ss-1353657", "<|cite_19|>": "arxiv-450213", "<|cite_20|>": "ss-1353658", "<|cite_23|>": "ss-846312", "<|multi_cite_24_1|>": "ss-722251", "<|multi_cite_24_2|>": "ss-1353659", "<|multi_cite_24_3|>": "ss-1353660", "<|multi_cite_24_4|>": "arxiv-392715", "<|multi_cite_24_5|>": "arxiv-343100", "<|multi_cite_25_1|>": "ss-2417214", "<|multi_cite_25_2|>": "arxiv-210813", "<|multi_cite_26_1|>": "ss-1123611", "<|multi_cite_26_2|>": "ss-1106461", "<|multi_cite_26_3|>": "ss-1119322", "<|multi_cite_27_1|>": "ss-1230263", "<|multi_cite_29_1|>": "arxiv-116820", "<|multi_cite_29_2|>": "arxiv-116595", "<|multi_cite_29_3|>": "arxiv-513958", "<|multi_cite_29_4|>": "ss-1663061", "<|multi_cite_29_5|>": "arxiv-391996", "<|multi_cite_29_6|>": "arxiv-553250"} |
2407.13477 | <|paper_start|> Title: The Construction of a Soft Gripper Based on Magnetorheological Elastomer with Permanent Magnet
Abstract: The Construction of a Soft Gripper Based on Magnetorheological Elastomer with Permanent Magnet: Recently, magnetorheological elastomers have become an interesting smart material with many new designs for robotics. A variety of applications have been built with magnetorheological elastomers, such as vibration absorbers, actuators, or grippers, showing that this material is promising for soft robotics. In this work, the novel concept of a gripper is proposed, exploring the features of a magnetorheological elastomer and permanent magnet. The gripper uses the energy of a permanent magnet to provide a self-closing gripping mechanism. The usage of flexible material enables one to hold delicate objects of various shapes. This paper presents the rolling effect of magnetorheological elastomer and permanent magnet, the design process, and the features of the soft gripper. The effectiveness of the soft gripper was validated in a series of experiments that involved lifting different objects.
Introduction
In recent times, soft magnetic materials have become more important in robotics and control systems applications <|cite_start|> (Reference: Design and development of a soft robotic gripper for manipulation in minimally invasive surgery: a proof of concept: ) <|cite_end|> <|cite_start|> (Reference: Actuating Soft Matter with Magnetic Torque: Here, recent significant developments are reviewed in manipulating soft matter systems through the use of magnetic torque. Magnetic torque enables the orientation, assembly, and manipulation of thermally fluctuating systems in broad material fields including biomaterials, ceramic and composite precursor suspensions, polymer solutions, fluids, foams, and gels. Magnetism offers an effective, safe, and massively parallel manufacturing approach. By exploiting magnetic torque, leading soft matter researchers have demonstrated new technologies in rheology, life sciences, optics, and structural materials. Specifically, magnetic torque has been used to assemble particle suspensions, to fabricate and actuate composite materials, and to control and manipulate biological materials. In each of these applications, there are energetic limitations to magnetic torque that need to be understood and characterized. However, magnetic torque offers a promising remote‐controlled approach to creating and enabling new soft matter technologies.) <|cite_end|> <|cite_start|> (Reference: A Magnetically-Actuated Untethered Jellyfish-Inspired Soft Milliswimmer: Untethered small-scale soft robots can potentially be used in healthcare and biomedical applications. They can access small spaces and reshape their bodies in a programmable manner to adapt to unstructured environments and have diverse dynamic behaviors. However, the functionalities of current miniature soft robots are limited, restricting their applications in medical procedures. Taking the advantage of the shape-programmable ability of magnetic soft composite materials, here we propose an untethered soft millirobot (jellyfishbot) that can swim like a jellyfish by timeand trajectory-asymmetric up and down beating of its lappets. Its swimming speed and direction can be controlled by tuning the magnitude, frequency, and direction of the external oscillating magnetic field. We demonstrate that such jellyfishbot can perform several tasks that could be useful towards medical applications, such as delivering drugs, clogging a narrow tube or vessel, and patching a target area under ultrasound imagingbased guiding. The millirobot presented in this paper could be used inside organs filled with fluids completely, such as a bladder or inflated stomach. KeywordsBio-inspired; soft robot; miniature robot; jellyfish) <|cite_end|> <|cite_start|> (Reference: Design, manufacturing and applications of small-scale magnetic soft robots: ) <|cite_end|>. The classic usage of soft magnetic materials is mainly related to vibration control <|cite_start|> (Reference: A state-of-the-art review on magnetorheological elastomer devices: During the last few decades, magnetorheological (MR) elastomers have attracted a significant amount of attention for their enormous potential in engineering applications. Because they are a solid counterpart to MR fluids, MR elastomers exhibit a unique field-dependent material property when exposed to a magnetic field, and they overcome major issues faced in magnetorheological fluids, e.g. the deposition of iron particles, sealing problems and environmental contamination. 
Such advantages offer great potential for designing intelligent devices to be used in various engineering fields, especially in fields that involve vibration reduction and isolation. This paper presents a state of the art review on the recent progress of MR elastomer technology, with special emphasis on the research and development of MR elastomer devices and their applications. To keep the integrity of the knowledge, this review includes a brief introduction of MR elastomer materials and follows with a discussion of critical issues involved in designing magnetorheological elastomer devices, i.e. operation modes, coil placements and principle fundamentals. A comprehensive review has been presented on the research and development of MR elastomer devices, including vibration absorbers, vibration isolators, base isolators, sensing devices, and so on. A summary of the research on the modeling mechanical behavior for both the material and the devices is presented. Finally, the challenges and the potential facing magnetorheological elastomer technology are discussed, and suggestions have been made based on the authors’ knowledge and experience.) <|cite_end|> <|cite_start|> (Reference: A new vibration isolator integrating tunable stiffness-damping and active driving properties based on radial-chains magnetorheological elastomer: ) <|cite_end|>. Its flexibility and ability to change the mechanical properties under the control of a magnetic field allow the construction of vibration isolators and absorbers in commercial applications. However, recently, these materials became more interesting for robotic applications <|cite_start|> (Reference: Actuating Soft Matter with Magnetic Torque: Here, recent significant developments are reviewed in manipulating soft matter systems through the use of magnetic torque. Magnetic torque enables the orientation, assembly, and manipulation of thermally fluctuating systems in broad material fields including biomaterials, ceramic and composite precursor suspensions, polymer solutions, fluids, foams, and gels. Magnetism offers an effective, safe, and massively parallel manufacturing approach. By exploiting magnetic torque, leading soft matter researchers have demonstrated new technologies in rheology, life sciences, optics, and structural materials. Specifically, magnetic torque has been used to assemble particle suspensions, to fabricate and actuate composite materials, and to control and manipulate biological materials. In each of these applications, there are energetic limitations to magnetic torque that need to be understood and characterized. However, magnetic torque offers a promising remote‐controlled approach to creating and enabling new soft matter technologies.) <|cite_end|> <|cite_start|> (Reference: A review of magnetic elastomers and their role in soft robotics: Soft robotics as a field of study incorporates different mechanisms, control schemes, as well as multifunctional materials to realize robots able to perform tasks inaccessible to traditional rigid robots. Conventional methods for controlling soft robots include pneumatic or hydraulic pressure sources, and some more recent methods involve temperature and voltage control to enact shape change. Magnetism was more recently introduced as a building block for soft robotic design and control, with recent publications incorporating magnetorheological fluids and magnetic particles in elastomers, to realize some of the same objectives present in more traditional soft robotics research. 
This review attempts to organize and emphasize the existing work with magnetism and soft robotics, specifically studies on magnetic elastomers, while highlighting potential avenues for further research enabled by these advances.) <|cite_end|> <|cite_start|> (Reference: Magnetorheological elastomers—An underestimated class of soft actuator materials: In this paper, the results of various investigations on the viscoelastic and magnetic properties of magnetorheological elastomers (MRE) in magnetic fields of variable strength, are reported. These characteristics have a strong influence on the behavior of MRE in various applications such as vibration damping and tunable vibration absorbers. Moreover, the actuation capabilities of MRE with different kinds of deformation in a magnetic field are considered. The degree of deformation depends on the magnetic field strength and its gradient and can reach about 10%. When removing the magnetic field, the MRE body relaxes back to its initial shape. MRE materials can be used for linear actuators, where the MRE body is deformed due to the attraction by a magnetic circuit acting from one side. Such linear actuators may be applied for haptic feedback and pumps. However, ring-shaped MRE bodies can also deform radially around their cylindrical axis, if the magnetic field is oriented correspondingly. This unusual type of deformation allows the realization of a proportional valve, whose opening is controlled by the magnetic field strength. Similar configurations can be used for controllable seals, for locking devices and even for inchworm drives. Various versions of this actuation principle are discussed in the paper.) <|cite_end|>, including the construction of haptic devices, soft grippers, and remote control of the robot by an external magnetic field <|cite_start|> (Reference: 5-DOF Manipulation of an Untethered Magnetic Device in Fluid using a Single Permanent Magnet: —This paper presents a three degree-of-freedom (3-DOF) closed-loop position and 2-DOF open-loop orientation control method for an untethered mockup magnetic capsule endoscope in fluid with a single permanent magnet positioned by a commercial robotic manipulator and a 3-DOF capsule-position localization system. Using traditional methods known to roboticists, we study the kinematics of untethered magnetic manipulation using a single permanent magnet as the end-effector of a robot manipulator. We present a control method that maintains 5-DOF control of a magnetic capsule when the robot manipulator is not near a kinematic singularity, and seamlessly enables a capsule’s position to be controlled when the manipulator nears a kinematic singularity by sacrificing control over the capsule’s orientation. We demonstrate the method’s robustness to a control rate of 25Hz, reduced localization rates down to 30Hz, and the presence of manipulator singularities. 5-DOF manipulation of an untethered device has been previously demonstrated by electromagnetic systems only. This work has applications for robotic capsule endoscopy of a fluid-distended stomach.) <|cite_end|> <|cite_start|> (Reference: The 2017 IEEE International Conference on Robotics and Automation (ICRA): ) <|cite_end|> <|cite_start|> (Reference: Learning of Sub-optimal Gait Controllers for Magnetic Walking Soft Millirobots: Untethered small-scale soft robots have promising applications in minimally invasive surgery, targeted drug delivery, and bioengineering applications as they can access confined spaces in the human body. 
However, due to highly nonlinear soft continuum deformation kinematics, inherent stochastic variability during fabrication at the small scale, and lack of accurate models, the conventional control methods cannot be easily applied. Adaptivity of robot control is additionally crucial for medical operations, as operation environments show large variability, and robot materials may degrade or change over time, which would have deteriorating effects on the robot motion and task performance. Therefore, we propose using a probabilistic learning approach for millimeter-scale magnetic walking soft robots using Bayesian optimization (BO) and Gaussian processes (GPs). Our approach provides a data-efficient learning scheme to find controller parameters while optimizing the stride length performance of the walking soft millirobot robot within a small number of physical experiments. We demonstrate adaptation to fabrication variabilities in three different robots and to walking surfaces with different roughness. We also show an improvement in the learning performance by transferring the learning results of one robot to the others as prior information.) <|cite_end|>.
In the literature, two kinds of magnetoactive material have been described. The first is a magnetorheological elastomer (MRE), an elastomer mixed with ferromagnetic particles. These materials' preparation process and properties can be found in detail in the work <|cite_start|> (Reference: A state-of-the-art review on magnetorheological elastomer devices: During the last few decades, magnetorheological (MR) elastomers have attracted a significant amount of attention for their enormous potential in engineering applications. Because they are a solid counterpart to MR fluids, MR elastomers exhibit a unique field-dependent material property when exposed to a magnetic field, and they overcome major issues faced in magnetorheological fluids, e.g. the deposition of iron particles, sealing problems and environmental contamination. Such advantages offer great potential for designing intelligent devices to be used in various engineering fields, especially in fields that involve vibration reduction and isolation. This paper presents a state of the art review on the recent progress of MR elastomer technology, with special emphasis on the research and development of MR elastomer devices and their applications. To keep the integrity of the knowledge, this review includes a brief introduction of MR elastomer materials and follows with a discussion of critical issues involved in designing magnetorheological elastomer devices, i.e. operation modes, coil placements and principle fundamentals. A comprehensive review has been presented on the research and development of MR elastomer devices, including vibration absorbers, vibration isolators, base isolators, sensing devices, and so on. A summary of the research on the modeling mechanical behavior for both the material and the devices is presented. Finally, the challenges and the potential facing magnetorheological elastomer technology are discussed, and suggestions have been made based on the authors’ knowledge and experience.) <|cite_end|> <|cite_start|> (Reference: A review of magnetic elastomers and their role in soft robotics: Soft robotics as a field of study incorporates different mechanisms, control schemes, as well as multifunctional materials to realize robots able to perform tasks inaccessible to traditional rigid robots. Conventional methods for controlling soft robots include pneumatic or hydraulic pressure sources, and some more recent methods involve temperature and voltage control to enact shape change. Magnetism was more recently introduced as a building block for soft robotic design and control, with recent publications incorporating magnetorheological fluids and magnetic particles in elastomers, to realize some of the same objectives present in more traditional soft robotics research. This review attempts to organize and emphasize the existing work with magnetism and soft robotics, specifically studies on magnetic elastomers, while highlighting potential avenues for further research enabled by these advances.) <|cite_end|> <|cite_start|> (Reference: Magnetorheological elastomers—An underestimated class of soft actuator materials: In this paper, the results of various investigations on the viscoelastic and magnetic properties of magnetorheological elastomers (MRE) in magnetic fields of variable strength, are reported. These characteristics have a strong influence on the behavior of MRE in various applications such as vibration damping and tunable vibration absorbers. 
Moreover, the actuation capabilities of MRE with different kinds of deformation in a magnetic field are considered. The degree of deformation depends on the magnetic field strength and its gradient and can reach about 10%. When removing the magnetic field, the MRE body relaxes back to its initial shape. MRE materials can be used for linear actuators, where the MRE body is deformed due to the attraction by a magnetic circuit acting from one side. Such linear actuators may be applied for haptic feedback and pumps. However, ring-shaped MRE bodies can also deform radially around their cylindrical axis, if the magnetic field is oriented correspondingly. This unusual type of deformation allows the realization of a proportional valve, whose opening is controlled by the magnetic field strength. Similar configurations can be used for controllable seals, for locking devices and even for inchworm drives. Various versions of this actuation principle are discussed in the paper.) <|cite_end|> <|cite_start|> (Reference: Design, Fabrication and Analysis of Magnetorheological Soft Gripper: The magnetorheological elastomer is promising material for applications in soft robotics. Its properties like reactive to external magnetic field and softness allow to construct an attractive devices. This work presents a construction of soft gripper assembled with magnetorheological elastomers. The work describes the detailed molding process of magnetorheological elastomers. Further, the electromechanical properties of magnetorheological elastomers are shown using a simple beam. Finally, the soft gripper is constructed and analyzed with the series of experiments.) <|cite_end|>. The main properties of MRE are flexibility and reaction to the magnetic field due to a relative permeability of about 2-5. The cost of MRE material flexibility is lower permeability in comparison to rigid materials such as steel. The second magnetoactive material is an elastomer mixed with a hard magnetic material. It is also flexible, but it has a magnetic field. The fabrication process of this material can be seen in the work <|cite_start|> (Reference: Actuating Soft Matter with Magnetic Torque: Here, recent significant developments are reviewed in manipulating soft matter systems through the use of magnetic torque. Magnetic torque enables the orientation, assembly, and manipulation of thermally fluctuating systems in broad material fields including biomaterials, ceramic and composite precursor suspensions, polymer solutions, fluids, foams, and gels. Magnetism offers an effective, safe, and massively parallel manufacturing approach. By exploiting magnetic torque, leading soft matter researchers have demonstrated new technologies in rheology, life sciences, optics, and structural materials. Specifically, magnetic torque has been used to assemble particle suspensions, to fabricate and actuate composite materials, and to control and manipulate biological materials. In each of these applications, there are energetic limitations to magnetic torque that need to be understood and characterized. However, magnetic torque offers a promising remote‐controlled approach to creating and enabling new soft matter technologies.) <|cite_end|>. In this work, we focus on the first type of magnetorheological elastomer in robotics applications. The main problem with MRE material in robotic applications is its low permeability, which limits the design possibilities. 
As previous studies show, MRE requires strong magnetic fields <|cite_start|> (Reference: Magnetorheological elastomers—An underestimated class of soft actuator materials: In this paper, the results of various investigations on the viscoelastic and magnetic properties of magnetorheological elastomers (MRE) in magnetic fields of variable strength, are reported. These characteristics have a strong influence on the behavior of MRE in various applications such as vibration damping and tunable vibration absorbers. Moreover, the actuation capabilities of MRE with different kinds of deformation in a magnetic field are considered. The degree of deformation depends on the magnetic field strength and its gradient and can reach about 10%. When removing the magnetic field, the MRE body relaxes back to its initial shape. MRE materials can be used for linear actuators, where the MRE body is deformed due to the attraction by a magnetic circuit acting from one side. Such linear actuators may be applied for haptic feedback and pumps. However, ring-shaped MRE bodies can also deform radially around their cylindrical axis, if the magnetic field is oriented correspondingly. This unusual type of deformation allows the realization of a proportional valve, whose opening is controlled by the magnetic field strength. Similar configurations can be used for controllable seals, for locking devices and even for inchworm drives. Various versions of this actuation principle are discussed in the paper.) <|cite_end|> <|cite_start|> (Reference: Exploring the potential of magnetorheology in robotic grippers: ) <|cite_end|> to be controlled or to produce large deformations. In the work <|cite_start|> (Reference: Exploring the potential of magnetorheology in robotic grippers: ) <|cite_end|> it is suggested that MRE requires further improvement of material parameters before being used in robotic applications. However, in our work, we show that MRE is already applicable to creating soft grippers.
In this work, we are particularly interested in the construction of the soft gripper, which in recent times has been strongly developed by researchers <|cite_start|> (Reference: Design, fabrication and control of soft robots: ) <|cite_end|> <|cite_start|> (Reference: Design and development of a soft robotic gripper for manipulation in minimally invasive surgery: a proof of concept: ) <|cite_end|> <|cite_start|> (Reference: Soft Grippers for Automatic Crop Harvesting: A Review: Agriculture 4.0 is transforming farming livelihoods thanks to the development and adoption of technologies such as artificial intelligence, the Internet of Things and robotics, traditionally used in other productive sectors. Soft robotics and soft grippers in particular are promising approaches to lead to new solutions in this field due to the need to meet hygiene and manipulation requirements in unstructured environments and in operation with delicate products. This review aims to provide an in-depth look at soft end-effectors for agricultural applications, with a special emphasis on robotic harvesting. To that end, the current state of automatic picking tasks for several crops is analysed, identifying which of them lack automatic solutions, and which methods are commonly used based on the botanical characteristics of the fruits. The latest advances in the design and implementation of soft grippers are also presented and discussed, studying the properties of their materials, their manufacturing processes, the gripping technologies and the proposed control methods. Finally, the challenges that have to be overcome to boost its definitive implementation in the real world are highlighted. Therefore, this review intends to serve as a guide for those researchers working in the field of soft robotics for Agriculture 4.0, and more specifically, in the design of soft grippers for fruit harvesting robots.) <|cite_end|> <|cite_start|> (Reference: Soft robotic grippers: Advances in soft robotics, materials science, and stretchable electronics have enabled rapid progress in soft grippers. Here, a critical overview of soft robotic grippers is presented, covering different material sets, physical principles, and device architectures. Soft gripping can be categorized into three technologies, enabling grasping by: a) actuation, b) controlled stiffness, and c) controlled adhesion. A comprehensive review of each type is presented. Compared to rigid grippers, end‐effectors fabricated from flexible and soft components can often grasp or manipulate a larger variety of objects. Such grippers are an example of morphological computation, where control complexity is greatly reduced by material softness and mechanical compliance. Advanced materials and soft components, in particular silicone elastomers, shape memory materials, and active polymers and gels, are increasingly investigated for the design of lighter, simpler, and more universal grippers, using the inherent functionality of the materials. Embedding stretchable distributed sensors in or on soft grippers greatly enhances the ways in which the grippers interact with objects. Challenges for soft grippers include miniaturization, robustness, speed, integration of sensing, and control. Improved materials, processing methods, and sensing play an important role in future research.) 
<|cite_end|> <|cite_start|> (Reference: Review of soft fluidic actuators: classification and materials modeling analysis: Soft actuators can be classified into five categories: tendon-driven actuators, electroactive polymers, shape-memory materials, soft fluidic actuators (SFAs), and hybrid actuators. The characteristics and potential challenges of each class are explained at the beginning of this review. Furthermore, recent advances especially focusing on SFAs are illustrated. There are already some impressive SFA designs to be found in the literature, constituting a fundamental basis for design and inspiration. The goal of this review is to address the latest innovative designs for SFAs and their challenges and improvements with respect to previous generations, and to help researchers to select appropriate materials for their application. We suggest seven influential designs: pneumatic artificial muscle, PneuNet, continuum arm, universal granular gripper, origami soft structure, vacuum-actuated muscle-inspired pneumatic, and hydraulically amplified self-healing electrostatic. The hybrid design of SFAs for improved functionality and shape controllability is also considered. Modeling SFAs, based on previous research, can be classified into three main groups: analytical methods, numerical methods, and model-free methods. We demonstrate the latest advances and potential challenges in each category. Regarding the fact that the performance of soft actuators is dependent on material selection, we then focus on the behaviors and mechanical properties of the various types of silicone that can be found in the SFA literature. For a better comparison of the different constitutive models of silicone materials proposed and tested in the literature, ABAQUS software is here employed to generate the engineering and true strain-stress data from the constitutive models, and compare them with standard uniaxial tensile test data based on ASTM412. Although the figures presented show that in a small range of stress–strain data, most of these models can predict the material model acceptably, few of them predict it accurately for large strain-stress values. Sensor technology integrated into SFAs is also being developed, and has the potential to increase controllability and observability by detecting a wide variety of data such as curvature, tactile contacts, produced force, and pressure values.) <|cite_end|>. One of the applications in which soft grippers can be used is the food industry <|cite_start|> (Reference: Design of a magnetorheological robot gripper for handling of delicate food products with varying shapes: ) <|cite_end|> <|cite_start|> (Reference: Performa of SCARA based intelligent 3 axis robotic soft gripper for enhanced material handling: ) <|cite_end|>, because they can grasp and transport delicate products such as fruits or candies without damaging them. In general, soft grippers can be categorized by their actuation principle; the most popular types use pneumatic or mechanical actuation. Representative soft pneumatic grippers are based on the PneuNet design <|cite_start|> (Reference: Pneumatic Networks for Soft Robotics that Actuate Rapidly: Soft robots actuated by inflation of a pneumatic network (a “pneu‐net”) of small channels in elastomeric materials are appealing for producing sophisticated motions with simple controls. Although current designs of pneu‐nets achieve motion with large amplitudes, they do so relatively slowly (over seconds).
This paper describes a new design for pneu‐nets that reduces the amount of gas needed for inflation of the pneu‐net, and thus increases its speed of actuation. A simple actuator can bend from a linear to a quasi‐circular shape in 50 ms when pressurized at ΔP = 345 kPa. At high rates of pressurization, the path along which the actuator bends depends on this rate. When inflated fully, the chambers of this new design experience only one‐tenth the change in volume of that required for the previous design. This small change in volume requires comparably low levels of strain in the material at maximum amplitudes of actuation, and commensurately low rates of fatigue and failure. This actuator can operate over a million cycles without significant degradation of performance. This design for soft robotic actuators combines high rates of actuation with high reliability of the actuator, and opens new areas of application for them.) <|cite_end|> or particle jamming <|cite_start|> (Reference: Design, fabrication and control of soft robots: ) <|cite_end|> <|cite_start|> (Reference: IEEE/RSJ International Conference on Intelligent Robots and Systems: The 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) will be held on October 23–27, 2022 in The Kyoto International Conference Center, Kyoto, Japan. The IROS is one of the largest and most impacting robotics research conferences worldwide. It provides an international forum for the international robotics research community to explore the frontier of science and technology in intelligent robots and smart machines. The theme of IROS 2022 is “Embodied AI for a Symbiotic Society’’. In addition to technical sessions and multi-media presentations, IROS conferences also hold panel discussions, forums, workshops, tutorials, exhibits, and technical tours to enrich the fruitful discussions among conference attendees.) <|cite_end|>. They are controlled by variable pressure and therefore must be supplied with compressed air. On the other hand, the soft gripper's fingers are moved by a cable-based mechanism <|cite_start|> (Reference: Design and development of a soft robotic gripper for manipulation in minimally invasive surgery: a proof of concept: ) <|cite_end|>, which represents a mechanical type. The soft gripper with pneumatic and mechanics actuation has good properties, however, they require a completed control system like an air compressor. Therefore, grippers based on actuation principles different from pneumatic or mechanical ones also play an important role. For example, in works <|cite_start|> (Reference: Rollable Multisegment Dielectric Elastomer Minimum Energy Structures for a Deployable Microsatellite Gripper: Debris in space presents an ever-increasing problem for spacecraft in Earth orbit. As a step in the mitigation of this issue, the CleanSpace One (CSO) microsatellite has been proposed. Its mission is to perform active debris removal of a decommissioned nanosatellite (the CubeSat SwissCube). An important aspect of this project is the development of the gripper system that will entrap the capture target. We present the development of rollable dielectric elastomer minimum energy structures (DEMES) as the main component of CSO's deployable gripper. DEMES consist of a prestretched dielectric elastomer actuator membrane bonded to a flexible frame. The actuator finds equilibrium in bending when the prestretch is released and the bending angle can be changed by the application of a voltage bias. 
The inherent flexibility and lightweight nature of the DEMES enables the gripper to be stored in a rolled-up state prior to deployment. We fabricated proof-of-concept actuators of three different geometries using a robust and repeatable fabrication methodology. The resulting actuators were mechanically resilient to external deformation, and display conformability to objects of varying shapes and sizes. Actuator mass is less than 0.65 g and all the actuators presented survived the rolling-up and subsequent deployment process. Our devices demonstrate a maximum change of bending angle of more than 60° and a maximum gripping (reaction) force of 2.2 mN for a single actuator.) <|cite_end|> <|cite_start|> (Reference: 2015 IEEE International Conference on Intelligent Robots and Systems (IROS): This paper introduces a novel way to improve the settling time of transitions between different walking controllers. This improvement is achieved by commanding a sequence of intermediate transitions to the target controller. As a result, the state of the system enters the domain of attraction of the target controller closer to the fixed point of the Poincar´ e Map. The method is applicable to any walking robot with one degree of underactuation. The problem is expressed as a Markov Decision Process and then solved with Reinforcement Learning. In order to simplify the stability analysis of underactuated walking the Hybrid Zero Dynamics framework is utilized. Another advantage of using the Hybrid Zero Dynamics is the dimensionality reduction of the state representation in the Markov Decision Process. The experimental results suggest that the proposed methodology performs better than a onestep transition for 84.34% of all the considered transitions for a simulated walking robot matching the parameters of RABBIT [1].) <|cite_end|> the dielectric electroactive polymers are applied to build grippers.
An interesting alternative to building soft grippers is the application of magnetoactive materials with soft or hard magnetic particles. In the literature <|cite_start|> (Reference: EPM–MRE: Electropermanent Magnet–Magnetorheological Elastomer for Soft Actuation System and Its Application to Robotic Grasping: Conventional soft robotic grippers have been developed based on soft pneumatic actuators or using smart soft materials, but they need external pressure sources, are easily deteriorated or used in millimetric-scale, which leads to increase of the independency, instability, and vulnerability. In this study, we propose an EPM–MRE (Electropermanent Magnet–Magnetorheological elastomer) actuation system, which strengthens the independency and stability of soft actuation by non-contact magnetic force. We established its fundamental principle and investigated parameter design through developing a suction cup as a robotic gripper. We prototyped an EPM-MRE actuated suction cup based on magnetic-charge and tensile modeling as well as optimized the structure of EPM by introducing axisymmetric EPM with frustum pole and suction cup by introducing bi-silicone structure to uniformly activate MRE membrane. The activation of EPM by different current pulses and suction force affected by contact shapes and air gaps were tested. Evaluations showed that the suction cup could be activated with 10 ms and generate a maximum suction force of 9.2 N at a steady state with 0 J energy consumption. In conclusion, the EPM-MRE soft actuation system can be used as a robotic gripper.) <|cite_end|> <|cite_start|> (Reference: 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft): ) <|cite_end|> <|cite_start|> (Reference: Beyond Human Hand: Shape-Adaptive and Reversible Magnetorheological Elastomer-Based Robot Gripper Skin: Developing a simple and universal solution for gripping fragile, multiscaled, and arbitrary-shaped objects using a robot gripper is challenging. Herein, we propose a universal, shape-adaptive/retaining and reversible, hardness-variable gripper skin that serves as a resourceful solution for grasping such objects without damaging them. The proposed universal gripper skin based on a magnetorheological elastomer is attached to a robot gripper. The proposed skin takes the shape of a target object as soon as the gripper grasps the object. At this time, we solidify the gripper skin by applying a magnetic field, thereby allowing the gripper to grasp the target object easily. After releasing the objects, the magnetic field is removed and the deformed proposed gripper skin rapidly restores its original shape. The proposed adaptive gripper skin is made to grasp various target objects, such as cylinders, cuboids, and triangular prisms, based on which its grasping performance is evaluated.) <|cite_end|> <|cite_start|> (Reference: Design, Fabrication and Analysis of Magnetorheological Soft Gripper: The magnetorheological elastomer is promising material for applications in soft robotics. Its properties like reactive to external magnetic field and softness allow to construct an attractive devices. This work presents a construction of soft gripper assembled with magnetorheological elastomers. The work describes the detailed molding process of magnetorheological elastomers. Further, the electromechanical properties of magnetorheological elastomers are shown using a simple beam. Finally, the soft gripper is constructed and analyzed with the series of experiments.) 
<|cite_end|> <|cite_start|> (Reference: DIW 3D printing of hybrid magnetorheological materials for application in soft robotic grippers: ) <|cite_end|>, there exist some preliminary studies on how to build a gripper using MRE material (with soft magnetic particles). In the work <|cite_start|> (Reference: 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft): ) <|cite_end|>, the MRE fingers, which are built on a rigid skeleton, are controlled by varying magnetic fields. The gripper can hold objects of various shapes but requires energy to remain in the closed state. In the work <|cite_start|> (Reference: EPM–MRE: Electropermanent Magnet–Magnetorheological Elastomer for Soft Actuation System and Its Application to Robotic Grasping: Conventional soft robotic grippers have been developed based on soft pneumatic actuators or using smart soft materials, but they need external pressure sources, are easily deteriorated or used in millimetric-scale, which leads to increase of the independency, instability, and vulnerability. In this study, we propose an EPM–MRE (Electropermanent Magnet–Magnetorheological elastomer) actuation system, which strengthens the independency and stability of soft actuation by non-contact magnetic force. We established its fundamental principle and investigated parameter design through developing a suction cup as a robotic gripper. We prototyped an EPM-MRE actuated suction cup based on magnetic-charge and tensile modeling as well as optimized the structure of EPM by introducing axisymmetric EPM with frustum pole and suction cup by introducing bi-silicone structure to uniformly activate MRE membrane. The activation of EPM by different current pulses and suction force affected by contact shapes and air gaps were tested. Evaluations showed that the suction cup could be activated with 10 ms and generate a maximum suction force of 9.2 N at a steady state with 0 J energy consumption. In conclusion, the EPM-MRE soft actuation system can be used as a robotic gripper.) <|cite_end|>, the soft gripper is built from an electropermanent magnet and an MRE membrane to create a suction cup. The study <|cite_start|> (Reference: Beyond Human Hand: Shape-Adaptive and Reversible Magnetorheological Elastomer-Based Robot Gripper Skin: Developing a simple and universal solution for gripping fragile, multiscaled, and arbitrary-shaped objects using a robot gripper is challenging. Herein, we propose a universal, shape-adaptive/retaining and reversible, hardness-variable gripper skin that serves as a resourceful solution for grasping such objects without damaging them. The proposed universal gripper skin based on a magnetorheological elastomer is attached to a robot gripper. The proposed skin takes the shape of a target object as soon as the gripper grasps the object. At this time, we solidify the gripper skin by applying a magnetic field, thereby allowing the gripper to grasp the target object easily. After releasing the objects, the magnetic field is removed and the deformed proposed gripper skin rapidly restores its original shape. The proposed adaptive gripper skin is made to grasp various target objects, such as cylinders, cuboids, and triangular prisms, based on which its grasping performance is evaluated.) <|cite_end|> shows a mechanical gripper in which the adaptive skin is built from MRE material. The gripper can adapt to different object shapes and can therefore hold delicate objects more precisely.
In the work <|cite_start|> (Reference: Design, Fabrication and Analysis of Magnetorheological Soft Gripper: The magnetorheological elastomer is promising material for applications in soft robotics. Its properties like reactive to external magnetic field and softness allow to construct an attractive devices. This work presents a construction of soft gripper assembled with magnetorheological elastomers. The work describes the detailed molding process of magnetorheological elastomers. Further, the electromechanical properties of magnetorheological elastomers are shown using a simple beam. Finally, the soft gripper is constructed and analyzed with the series of experiments.) <|cite_end|> <|cite_start|> (Reference: DIW 3D printing of hybrid magnetorheological materials for application in soft robotic grippers: ) <|cite_end|> the initial concept of the MRE gripper with fingers was designed using a magnetorheological material controlled by an electromagnet. This gripper holds only lightweight objects and its gripping force is created only by stress caused by deformation from the held object. Finally, MRE can also be an auxiliary material for building a gripper like in works <|cite_start|> (Reference: Gripping characteristics of an electromagnetically activated magnetorheological fluid-based gripper: The design and test of a magnetorheological fluid (MRF)-based universal gripper (MR gripper) are presented in this study. The MR gripper was developed to have a simple design, but with the ability to produce reliable gripping and handling of a wide range of simple objects. The MR gripper design consists of a bladder mounted atop an electromagnet, where the bladder is filled with an MRF, which was formulated to have long-term stable sedimentation stability, that was synthesized using a high viscosity linear polysiloxane (HVLP) carrier fluid with a carbonyl iron particle (CIP) volume fraction of 35%. Two bladders were fabricated: a magnetizable bladder using a magnetorheological elastomer (MRE), and a passive (non-magnetizable) silicone rubber bladder. The holding force and applied (initial compression) force of the MR gripper for a bladder fill volume of 75% were experimentally measured, for both magnetizable and passive bladders, using a servohydraulic material testing machine for a range of objects. The gripping performance of the MR gripper using an MRE bladder was compared to that of the MR gripper using a passive bladder.) <|cite_end|> <|cite_start|> (Reference: Holding Performance of an Adaptive Magnetorheological Fluid-Based Robotic Claw: This study addresses the holding performance of an adaptive magnetorheological fluid (MRF)-based robotic claw that can grasp a wide range of objects satisfying various grasping task requirements. To this end, a two-finger type of MRF-based robotic claw was proposed in this study. Two magnetorheological (MR) grippers with MR elastomer (MRE) bladders were mounted at the end of each finger. A target object was placed between these two MR grippers and was grasped by manually adjusting the distance between these two fingers. This adjustment of the distance between the two fingers results in a change in the normal force applied to the object. The holding forces of the MRF-based robotic claw with respect to both applied normal forces and magnetic field strengths were experimentally measured using an Instron material testing machine. 
From the measured holding forces, the dynamic and static holding forces, controllable holding forces, and holding coefficients were determined for the evaluation of the holding performances of the MRF-based robotic claw. The feasibility of the MRF-based robotic claw was experimentally confirmed.) <|cite_end|>, where the MRE creates a bladder for magnetorheological fluid. Another application of MRE to grippers as auxiliary material is its connection to shape memory alloy (SMA) presented in <|cite_start|> (Reference: Equipping new sma artificial muscles with controllable mrf exoskeletons for robotic manipulators and grippers: Shape memory alloy (SMA) wires are one of the widely used materials for soft artificial muscles. However, SMA artificial muscles have two problems, including limited load holding ability due to their soft nature and slow response due to their long cooling time. The main contributions of this article are the developments of a controllable magnetorheological fluid (MRF) exoskeleton and a fast-response magnetorheological elastomer-SMA artificial muscle as effective approaches to solve the above-mentioned problems. The controllable MRF exoskeleton provides variable stiffness so that it can be flexible enough to allow the manipulator to bend as required while stiff enough to hold up heavy loads. This new artificial muscle accelerates the cooling speed of SMA wire to shorten its recovery time. Our tests proved that this new artificial muscle, compared with conventional SMA artificial muscles, could improve the recovery speed by up to 333%. The new artificial muscle and the MRF exoskeleton assembled a robotic manipulator and then a robotic gripper with three of those manipulators. The experimental tests verified that the loading capability of the new gripper had increased by 440% compared to the pure SMA gripper.) <|cite_end|>. It is only responsible for improving the performance of SMA. As an alternative to MRE, the application of hard magnetic particles with flexible polymer is also used to build soft grippers, as shown in the work <|cite_start|> (Reference: Magnetically actuated and guided milli-gripper for medical applications: This paper presents the design, kinematics, fabrication, and magnetic manipulation of a milli-gripper for medical applications. The design employs a permanent magnet for two purposes. It actuates the compliant gripper and allows for maneuverability of the milli-gripper in an externally applied magnetic field generated by an electromagnetic manipulation system. The modular milli-gripper can be manipulated directly or attached to the distal tip of a magnetically steered catheter. Experiments show successful actuation of the gripper and guidance of the device with the integrated gripper in both the tethered and untethered configuration.) <|cite_end|> <|cite_start|> (Reference: Design of a magnetorheological robot gripper for handling of delicate food products with varying shapes: ) <|cite_end|> <|cite_start|> (Reference: Magnetorheological Elastomer Actuated Multi-stable Gripper Reinforced Stiffness with Twisted and Coiled Polymer: ) <|cite_end|> <|cite_start|> (Reference: Magnetic Actuator with Programmable Force Distribution and Self-Sensing for Bidirectional Deformation Control: The realization of bidirectional deformation function is an important symbol of most natural creatures and intelligent flexible robots. 
Currently, most soft actuators are developed on the basis of transition between two different states, which means difficulties of control during the whole movement. In this work, variable distribution positions of hard magnetorheological elastomers (H‐MREs) in the matrix are used to achieve different magnetic force distributions. The programmable force distribution promotes different deformations of the magnetic actuators, which enlarges the deformation range by 51.69% under the same magnetic field. Pre‐magnetizing the H‐MREs makes them have a certain residual magnetization. Due to the residual magnetization property of H‐MREs, the magnetized actuator can not only be attracted but also be repelled by the applied magnetic field. This bidirectional deformation capability gives the actuators a wider deformation range and greater clamping force, such as the smart gripper for pinching up object. In addition, the actuators are integrated with a flexible sensing layer with high resolution and strong stability for self‐sensing. This kind of force distribution programmable technology and self‐sensing performance have the potential to broaden the application of actuators in medical equipment requiring high control accuracy.) <|cite_end|>. In this topic, the interesting review of magnetoactive materials with hard magnetic particles for the construction of microgrippers is presented in the work <|cite_start|> (Reference: Design, manufacturing and applications of small-scale magnetic soft robots: ) <|cite_end|> where the most common configuration is to control the gripper by varying external fields. It is worth noticing that magnetoactive materials with hard magnetic particles are more difficult to fabricate than MRE.
The main goal of the presented work is to investigate a novel geometry concept for the MRE gripper. It is based on the interaction between the permanent magnet and the MRE stripe. We demonstrated that these elements can create a system like a mechanical rotational spring. Additionally, it is easy to control the gripper by pulling the electromagnet. Compared to the ideas presented in <|cite_start|> (Reference: EPM–MRE: Electropermanent Magnet–Magnetorheological Elastomer for Soft Actuation System and Its Application to Robotic Grasping: Conventional soft robotic grippers have been developed based on soft pneumatic actuators or using smart soft materials, but they need external pressure sources, are easily deteriorated or used in millimetric-scale, which leads to increase of the independency, instability, and vulnerability. In this study, we propose an EPM–MRE (Electropermanent Magnet–Magnetorheological elastomer) actuation system, which strengthens the independency and stability of soft actuation by non-contact magnetic force. We established its fundamental principle and investigated parameter design through developing a suction cup as a robotic gripper. We prototyped an EPM-MRE actuated suction cup based on magnetic-charge and tensile modeling as well as optimized the structure of EPM by introducing axisymmetric EPM with frustum pole and suction cup by introducing bi-silicone structure to uniformly activate MRE membrane. The activation of EPM by different current pulses and suction force affected by contact shapes and air gaps were tested. Evaluations showed that the suction cup could be activated with 10 ms and generate a maximum suction force of 9.2 N at a steady state with 0 J energy consumption. In conclusion, the EPM-MRE soft actuation system can be used as a robotic gripper.) <|cite_end|> <|cite_start|> (Reference: 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft): ) <|cite_end|> <|cite_start|> (Reference: Beyond Human Hand: Shape-Adaptive and Reversible Magnetorheological Elastomer-Based Robot Gripper Skin: Developing a simple and universal solution for gripping fragile, multiscaled, and arbitrary-shaped objects using a robot gripper is challenging. Herein, we propose a universal, shape-adaptive/retaining and reversible, hardness-variable gripper skin that serves as a resourceful solution for grasping such objects without damaging them. The proposed universal gripper skin based on a magnetorheological elastomer is attached to a robot gripper. The proposed skin takes the shape of a target object as soon as the gripper grasps the object. At this time, we solidify the gripper skin by applying a magnetic field, thereby allowing the gripper to grasp the target object easily. After releasing the objects, the magnetic field is removed and the deformed proposed gripper skin rapidly restores its original shape. The proposed adaptive gripper skin is made to grasp various target objects, such as cylinders, cuboids, and triangular prisms, based on which its grasping performance is evaluated.) <|cite_end|> <|cite_start|> (Reference: Design, Fabrication and Analysis of Magnetorheological Soft Gripper: The magnetorheological elastomer is promising material for applications in soft robotics. Its properties like reactive to external magnetic field and softness allow to construct an attractive devices. This work presents a construction of soft gripper assembled with magnetorheological elastomers. The work describes the detailed molding process of magnetorheological elastomers. 
Further, the electromechanical properties of magnetorheological elastomers are shown using a simple beam. Finally, the soft gripper is constructed and analyzed with the series of experiments.) <|cite_end|> <|cite_start|> (Reference: DIW 3D printing of hybrid magnetorheological materials for application in soft robotic grippers: ) <|cite_end|> <|cite_start|> (Reference: Magnetorheological elastomers—An underestimated class of soft actuator materials: In this paper, the results of various investigations on the viscoelastic and magnetic properties of magnetorheological elastomers (MRE) in magnetic fields of variable strength, are reported. These characteristics have a strong influence on the behavior of MRE in various applications such as vibration damping and tunable vibration absorbers. Moreover, the actuation capabilities of MRE with different kinds of deformation in a magnetic field are considered. The degree of deformation depends on the magnetic field strength and its gradient and can reach about 10%. When removing the magnetic field, the MRE body relaxes back to its initial shape. MRE materials can be used for linear actuators, where the MRE body is deformed due to the attraction by a magnetic circuit acting from one side. Such linear actuators may be applied for haptic feedback and pumps. However, ring-shaped MRE bodies can also deform radially around their cylindrical axis, if the magnetic field is oriented correspondingly. This unusual type of deformation allows the realization of a proportional valve, whose opening is controlled by the magnetic field strength. Similar configurations can be used for controllable seals, for locking devices and even for inchworm drives. Various versions of this actuation principle are discussed in the paper.) <|cite_end|> <|cite_start|> (Reference: Exploring the potential of magnetorheology in robotic grippers: ) <|cite_end|> a new effect of the rolling torque between the MRE and a permanent magnet is exploited to provide a self-closing state. To verify the proposed concept, the gripper prototype was built and tested in experiments. <|paper_end|> | [
"<|reference_start|> Design, fabrication and control of soft robots: <|reference_end|>",
"<|reference_start|> IEEE/RSJ International Conference on Intelligent Robots and Systems: The 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2022) will be held on October 23–27, 2022 in The Kyoto International Conference Center, Kyoto, Japan. The IROS is one of the largest and most impacting robotics research conferences worldwide. It provides an international forum for the international robotics research community to explore the frontier of science and technology in intelligent robots and smart machines. The theme of IROS 2022 is “Embodied AI for a Symbiotic Society’’. In addition to technical sessions and multi-media presentations, IROS conferences also hold panel discussions, forums, workshops, tutorials, exhibits, and technical tours to enrich the fruitful discussions among conference attendees. <|reference_end|>",
"<|reference_start|> DIW 3D printing of hybrid magnetorheological materials for application in soft robotic grippers: <|reference_end|>",
"<|reference_start|> 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft): <|reference_end|>"
] | [
28,
29,
42,
52
] | {"<|multi_cite_1_1|>": "ss-2486970", "<|multi_cite_1_2|>": "ss-2086839", "<|multi_cite_1_3|>": "ss-2086840", "<|multi_cite_1_4|>": "ss-2086841", "<|multi_cite_2_1|>": "ss-2086842", "<|multi_cite_2_2|>": "ss-2086843", "<|multi_cite_3_1|>": "ss-2086839", "<|multi_cite_3_2|>": "ss-2542089", "<|multi_cite_3_3|>": "ss-2086844", "<|multi_cite_4_1|>": "ss-2086845", "<|multi_cite_4_2|>": "ss-770329", "<|multi_cite_4_3|>": "ss-2086846", "<|multi_cite_5_1|>": "ss-2086842", "<|multi_cite_5_2|>": "ss-2542089", "<|multi_cite_5_3|>": "ss-2086844", "<|multi_cite_5_4|>": "ss-2086847", "<|cite_6|>": "ss-2086839", "<|multi_cite_7_1|>": "ss-2086844", "<|multi_cite_7_2|>": "ss-2086848", "<|cite_8|>": "ss-2086848", "<|multi_cite_9_1|>": "ss-1105714", "<|multi_cite_9_2|>": "ss-2486970", "<|multi_cite_9_3|>": "ss-1345989", "<|multi_cite_9_4|>": "ss-1262129", "<|multi_cite_9_5|>": "ss-2086849", "<|multi_cite_10_1|>": "ss-2086850", "<|multi_cite_10_2|>": "ss-2086851", "<|cite_11|>": "ss-744764", "<|multi_cite_12_1|>": "ss-1105714", "<|multi_cite_12_2|>": "ss-770619", "<|cite_13|>": "ss-2486970", "<|multi_cite_14_1|>": "ss-2086852", "<|multi_cite_14_2|>": "ss-1304834", "<|multi_cite_15_1|>": "ss-2086853", "<|multi_cite_15_2|>": "ss-1078538", "<|multi_cite_15_3|>": "ss-2086854", "<|multi_cite_15_4|>": "ss-2086847", "<|multi_cite_15_5|>": "ss-2086855", "<|cite_16|>": "ss-1078538", "<|cite_17|>": "ss-2086853", "<|cite_18|>": "ss-2086854", "<|multi_cite_19_1|>": "ss-2086847", "<|multi_cite_19_2|>": "ss-2086855", "<|multi_cite_20_1|>": "ss-2086856", "<|multi_cite_20_2|>": "ss-2086857", "<|cite_21|>": "ss-1614889", "<|multi_cite_22_1|>": "ss-2086858", "<|multi_cite_22_2|>": "ss-2086850", "<|multi_cite_22_3|>": "ss-2086859", "<|multi_cite_22_4|>": "ss-2086860", "<|cite_23|>": "ss-2086841", "<|multi_cite_24_1|>": "ss-2086853", "<|multi_cite_24_2|>": "ss-1078538", "<|multi_cite_24_3|>": "ss-2086854", "<|multi_cite_24_4|>": "ss-2086847", "<|multi_cite_24_5|>": "ss-2086855", "<|multi_cite_24_6|>": "ss-2086844", "<|multi_cite_24_7|>": "ss-2086848"} |
2403.19273 | <|paper_start|> Title: A Machine Learning Approach for Crop Yield and Disease Prediction Integrating Soil Nutrition and Weather Factors
Abstract: A Machine Learning Approach for Crop Yield and Disease Prediction Integrating Soil Nutrition and Weather Factors: The main objective of this work is the development of an intelligent agricultural decision-support system for crop selection and disease forecasting in Bangladesh. The economy of the nation depends heavily on agriculture; however, farmers face obstacles in choosing crops with better production rates and in controlling crop diseases efficiently. These issues are addressed in this research by utilizing machine learning methods and real-world datasets. The proposed approach uses a variety of datasets on crop production, soil conditions, agro-meteorological regions, crop diseases, and meteorological factors. These datasets offer insight into disease trends, the soil-nutrition demands of crops, and agricultural production history. Using this knowledge, the model first recommends a primary list of crops based on the soil nutrition of a particular user location. Meteorological variables such as temperature, rainfall, and humidity are then predicted using SARIMAX models. These weather predictions are used to forecast possible diseases for the primary crop list with a support vector classifier. Finally, the model applies decision tree regression to forecast crop yield and provides a final crop list together with the associated disease forecast. Using the model's output, farmers can choose the most productive crops and can reduce output losses by taking preventive actions against likely diseases. Consequently, planning and decision-making processes are supported, and farmers can anticipate possible crop yields. Overall, by offering a detailed decision support system for crop selection and disease prediction, this work can play a vital role in advancing agricultural practices in Bangladesh.
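As a rough, illustrative sketch of the yield-forecasting stage mentioned in the abstract, the snippet below fits a decision tree regressor to a small synthetic table of production records. The column names, numeric values, and tree depth are assumptions made for illustration only; they are not taken from the datasets used in this work.

\begin{verbatim}
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Illustrative historical production records (hypothetical values).
history = pd.DataFrame({
    "area_ha":       [120, 300, 250, 90, 410, 150],
    "rainfall_mm":   [210, 180, 260, 300, 150, 240],
    "temperature_c": [27.5, 29.0, 26.0, 25.5, 30.0, 28.0],
    "humidity_pct":  [80, 75, 85, 88, 70, 82],
    "yield_t_ha":    [3.1, 2.4, 3.6, 2.9, 2.1, 3.0],
})

features = ["area_ha", "rainfall_mm", "temperature_c", "humidity_pct"]
model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(history[features], history["yield_t_ha"])

# Forecast yield for the next season using (assumed) forecasted weather values.
next_season = pd.DataFrame([[200, 230, 27.0, 83]], columns=features)
print("Predicted yield (t/ha):", model.predict(next_season)[0])
\end{verbatim}

In the full system, a prediction of this kind would presumably be produced for each shortlisted crop and combined with the disease forecast to rank the final crop list.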
Introduction
Agriculture remains a central lifeline of Bangladesh's economy, contributing 12\% of the nation's GDP and employing 45\% of its workforce <|cite_start|> (Reference: Role of agriculture in Bangladesh economy: uncovering the problems and challenges: Although modern economy is largely dependent on industrialization, agriculture remains the lifeblood for the economy of Bangladesh. Agriculture has been functioning in Bangladesh since long as a catalyst for sustainable development and growth of the country. Over time, the share of agriculture in GDP has significantly declined in Bangladesh but the contribution of agriculture to non-agricultural growth has maintained an upward trend. Thus, agricultural sector remains an irreplaceable driving force for economic growth of the country. Based on secondary data, the study intends to describe the role of agriculture in the economy of Bangladesh with a focus on problems and challenges of the sector. The main reason behind the loss of agricultural land in Bangladesh is the growth of rural housing followed by urbanization and industrialization. Residences of increasing population of the country are expanding at the cost of agricultural land. Despite many prospects of agriculture sector, some challenges are still present there. In order to address the challenges, a number of collaborative and coordinated steps should be initiated. As the food security is a major concern for Bangladesh, necessary steps should be taken to conserve agricultural land from its shifting to non-agricultural utilization.) <|cite_end|>. Nevertheless, increasing population pressure and various other challenges are testing the sector's capabilities. Key problems include limited knowledge of appropriate crop selection with respect to soil nutrition, weather forecasting limitations, and vulnerability to pests and diseases. Modernizing agriculture is therefore essential, and applying machine learning and artificial intelligence in the age of the fourth industrial revolution can drive this transformation.
Crops cultivated in Bangladesh are influenced by numerous factors including soil nutrients, weather, and disease risks, all varying across the country's regions. Recognizing these regulators is crucial as not all crops are suitable for all areas. Soil testing is particularly important for understanding soil composition and nutrient levels, paving the way for better crop selection <|cite_start|> (Reference: Classification of soil and crop suggestion using machine learning techniques: Agriculture is the major source for living for the people of India. Agriculture research is the major source of economy for the country. Soil is an important key factor for agriculture .There are several soil varieties in India. In order to predict the type of crop that can be cultivated in that particular soil type we need to understand the features and characteristics of the soil type. Machine learning techniques provides a flexible way in this case. Classifying the soil according to the soil nutrients is much beneficial or the famers to predict which crop can be cultivated in a particular soil type. Data mining and machine learning is still an emerging technique in the field of agriculture and horticulture. In this paper we have proposed a method for classifying the soil according to the macro nutrients and micro nutrients and predicting the type of crop that can be cultivate in that particular soil type. Several type of machine learning algorithms are used such as K-Nearest Neighbour (k-NN), Bagged tree, Support vector machine(SVM) and logistic regression. KeywordsMachine learning, agriculture, soil, classification, nutrients, chemical feature, accuracy.) <|cite_end|>. Machine learning can streamline this process by analyzing soil states and suggesting suitable crops accordingly.
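To make this idea concrete, the following minimal sketch shortlists crops whose nutrient requirement ranges cover the values measured in a soil test. The crops, requirement ranges, and soil-test values are hypothetical placeholders rather than data from this study.

\begin{verbatim}
import pandas as pd

# Hypothetical per-crop requirement ranges for nitrogen, phosphorus, and pH.
requirements = pd.DataFrame([
    {"crop": "rice",   "n_min":  80, "n_max": 120,
     "p_min": 15, "p_max": 30, "ph_min": 5.5, "ph_max": 7.0},
    {"crop": "wheat",  "n_min": 100, "n_max": 140,
     "p_min": 20, "p_max": 35, "ph_min": 6.0, "ph_max": 7.5},
    {"crop": "potato", "n_min": 120, "n_max": 180,
     "p_min": 25, "p_max": 45, "ph_min": 4.8, "ph_max": 6.5},
])

# A soil-test report for the user's location (illustrative values).
soil = {"n": 110, "p": 28, "ph": 6.2}

# Keep crops whose requirement ranges cover the measured soil values.
mask = (
    requirements["n_min"].le(soil["n"]) & requirements["n_max"].ge(soil["n"])
    & requirements["p_min"].le(soil["p"]) & requirements["p_max"].ge(soil["p"])
    & requirements["ph_min"].le(soil["ph"]) & requirements["ph_max"].ge(soil["ph"])
)
shortlist = requirements.loc[mask, "crop"].tolist()
print("Primary crop shortlist:", shortlist)  # -> ['rice', 'wheat'] for these values
\end{verbatim}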
Weather is another crucial determinant of crop production <|cite_start|> (Reference: Climate Impacts on Agriculture: Implications for Crop Production: Changes in temperature, CO2, and precipitation under the scenarios of climate change for the next 30 yr present a challenge to crop production. This review focuses on the impact of temperature, CO2, and ozone on agronomic crops and the implications for crop production. Understanding these implications for agricultural crops is critical for developing cropping systems resilient to stresses induced by climate change. There is variation among crops in their response to CO2, temperature, and precipitation changes and, with the regional differences in predicted climate, a situation is created in which the responses will be further complicated. For example, the temperature effects on soybean [Glycine max (L.) Merr.] could potentially cause yield reductions of 2.4% in the South but an increase of 1.7% in the Midwest. The frequency of years when temperatures exceed thresholds for damage during critical growth stages is likely to increase for some crops and regions. The increase in CO2 contributes significantly to enhanced plant growth and improved water use efficiency (WUE); however, there may be a downscaling of these positive impacts due to higher temperatures plants will experience during their growth cycle. A challenge is to understand the interactions of the changing climatic parameters because of the interactions among temperature, CO2, and precipitation on plant growth and development and also on the biotic stresses of weeds, insects, and diseases. Agronomists will have to consider the variations in temperature and precipitation as part of the production system if they are to ensure the food security required by an ever increasing population.) <|cite_end|>; reliable forecasting is therefore essential for planning. By incorporating machine learning and AI into weather prediction, more accurate forecasts can be generated to support crop selection. Furthermore, machine learning's predictive capabilities can help to assess the risk of crop diseases based on weather parameters such as temperature, rainfall, and humidity, thereby further informing crop choices.
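As a hedged illustration of this idea, the snippet below trains a support vector classifier to flag disease-favourable weather for a single crop. The training rows and labels are invented for illustration; in the proposed system, the inputs would instead come from historical disease records and from the SARIMAX weather forecasts.

\begin{verbatim}
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative weather observations: [temperature (C), humidity (%), rainfall (mm)].
X = np.array([
    [24, 90, 30], [26, 88, 25], [23, 92, 40], [27, 85, 20],  # weeks with a reported outbreak
    [31, 60,  5], [33, 55,  0], [30, 65, 10], [32, 58,  2],  # weeks without an outbreak
])
y = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = disease reported, 0 = not reported

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale"))
clf.fit(X, y)

# Assess the risk for a forecasted week (values would come from the weather forecast).
forecast = np.array([[25, 89, 28]])
label = clf.predict(forecast)[0]
score = clf.decision_function(forecast)[0]  # signed distance from the decision boundary
print(f"Disease-favourable weather: {bool(label)} (decision score {score:+.2f})")
\end{verbatim}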
Despite significant research on crop recommendation based on various parameters, no prior work has combined weather forecasts, soil nutrition, and disease prediction to improve crop productivity in Bangladesh. The primary objective of this work is to combine these factors for crop recommendation. The main challenge lies in merging disease prediction with weather parameters to enhance the quality of the suggestions. Data related to disease risks and weather conditions need to be carefully organized and categorized for different crops.
The contributions of the proposed work can be summarized as follows:
\begin{itemize}
\item A unified framework is created for weather prediction in Bangladesh, integrating different meteorological variables using the SARIMAX time series model (a minimal illustrative sketch of this step is given after this list).
\item A dataset linking the diseases of different crops to weather parameters (temperature and humidity) is developed.
\item A user-friendly crop-suggesting model is introduced that aids farmers in making efficient decisions based on their soil attributes, weather forecasts, and disease risks, thereby enhancing their crop yield and profitability.
\end{itemize}
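A minimal sketch of the forecasting step named in the first contribution is given below, using the SARIMAX implementation from statsmodels on a synthetic monthly temperature series. Both the series and the (p, d, q)(P, D, Q, s) orders are placeholders; the orders used in practice would have to be selected from the real data, for example via AIC.

\begin{verbatim}
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly mean-temperature series with an annual cycle (illustrative only).
rng = np.random.default_rng(0)
idx = pd.date_range("2010-01-01", periods=120, freq="MS")
temp = 26 + 4 * np.sin(2 * np.pi * idx.month / 12) + rng.normal(0, 0.5, len(idx))
series = pd.Series(temp, index=idx)

# Seasonal model with a 12-month period; the orders here are placeholders.
model = SARIMAX(series, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)

# Forecast the next 12 months of mean temperature.
forecast = result.forecast(steps=12)
print(forecast.round(1))
\end{verbatim}

Analogous models can be fitted separately for rainfall and humidity, as outlined in the abstract.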
Section II reviews the existing literature on this topic. Our methodology is described in detail in Section III. A practical evaluation of our model is presented in Section IV, and the paper is concluded in Section V.
Related Work
In recent years, several researchers have attempted to apply modern approaches in the agricultural field to improve crop production.
Hatfield et al. <|cite_start|> (Reference: An Intelligent Model to Suggest Top Productive Seasonal Crops Based on User Location in the Context of Bangladesh: ) <|cite_end|> used the Seasonal Autoregressive Integrated Moving Average model and employed random forest regression to estimate crop yield. However, they did not account for the impact of diseases on crop production.
S. Khaki et al. <|cite_start|> (Reference: Crop Yield Prediction Using Deep Neural Networks: Crop yield is a highly complex trait determined by multiple factors such as genotype, environment, and their interactions. Accurate yield prediction requires fundamental understanding of the functional relationship between yield and these interactive factors, and to reveal such relationship requires both comprehensive datasets and powerful algorithms. In the 2018 Syngenta Crop Challenge, Syngenta released several large datasets that recorded the genotype and yield performances of 2,267 maize hybrids planted in 2,247 locations between 2008 and 2016 and asked participants to predict the yield performance in 2017. As one of the winning teams, we designed a deep neural network (DNN) approach that took advantage of state-of-the-art modeling and solution techniques. Our model was found to have a superior prediction accuracy, with a root-mean-square-error (RMSE) being 12% of the average yield and 50% of the standard deviation for the validation dataset using predicted weather data. With perfect weather data, the RMSE would be reduced to 11% of the average yield and 46% of the standard deviation. We also performed feature selection based on the trained DNN model, which successfully decreased the dimension of the input space without significant drop in the prediction accuracy. Our computational results suggested that this model significantly outperformed other popular methods such as Lasso, shallow neural networks (SNN), and regression tree (RT). The results also revealed that environmental factors had a greater effect on the crop yield than genotype.) <|cite_end|> introduced a deep neural network model for forecasting crop production. They identified the significant influence of weather factors on crop production compared to genotype. They also employed the neural network model to predict the weather.
Saranya et al. <|cite_start|> (Reference: Classification of soil and crop suggestion using machine learning techniques: Agriculture is the major source for living for the people of India. Agriculture research is the major source of economy for the country. Soil is an important key factor for agriculture .There are several soil varieties in India. In order to predict the type of crop that can be cultivated in that particular soil type we need to understand the features and characteristics of the soil type. Machine learning techniques provides a flexible way in this case. Classifying the soil according to the soil nutrients is much beneficial or the famers to predict which crop can be cultivated in a particular soil type. Data mining and machine learning is still an emerging technique in the field of agriculture and horticulture. In this paper we have proposed a method for classifying the soil according to the macro nutrients and micro nutrients and predicting the type of crop that can be cultivate in that particular soil type. Several type of machine learning algorithms are used such as K-Nearest Neighbour (k-NN), Bagged tree, Support vector machine(SVM) and logistic regression. KeywordsMachine learning, agriculture, soil, classification, nutrients, chemical feature, accuracy.) <|cite_end|> used SVM, KNN, and logistic regression to predict the best crop according to soil tests and weather forecasts. Elavarasan et al. <|cite_start|> (Reference: Forecasting yield by integrating agrarian factors and machine learning models: A survey: ) <|cite_end|> applied various machine learning models, both supervised and unsupervised, to
predict crop production and found that the expectation-maximization algorithm and the support vector machine yielded better results than the other algorithms across various error measures. Khattab et al. <|cite_start|> (Reference: An IoT-based cognitive monitoring system for early plant disease forecast: ) <|cite_end|> focused on disease prediction
based on weather parameters. They made use of an IoT-based monitoring system to forecast early plant disease. Ryan et al. <|cite_start|> (Reference: Big data and machine learning for crop protection: Crop protection is the science and practice of managing plant diseases, weeds and other pests. Weed management and control are important given that crop yield losses caused by pests and weeds are high. However, farmers face increased complexity of weed control due to evolved resistance to herbicides. This paper first presents a brief review of some significant research efforts in crop protection using Big data with the focus on weed control and management followed by some potential applications. Some machine learning techniques for Big data analytics are also reviewed. The outlook for Big data and machine learning in crop protection is very promising. The potential of using Markov random fields (MRF) which takes into account the spatial component among neighboring sites for herbicide resistance modeling of ryegrass is then explored. To the best of our knowledge, no similar work of modeling herbicide resistance using the MRF has been reported. Experiments and data analytics have been performed on data collected from farms in Australia. Results have revealed the good performance of our approach.) <|cite_end|> utilised Markov random fields, which capture the spatial relationships among neighbouring sites, to model herbicide resistance. Afrin et al. <|cite_start|> (Reference: Analysis of soil properties and climatic data to predict crop yields and cluster different agricultural regions of Bangladesh: Bangladesh, a nation renowned for its rich fertile land and a population around 160 million, earns most of its living from agriculture. The nutrient rich lands help us providing year-round crop yields that play a crucial role for the economy of Bangladesh. Thus, this is important to deliberately work on agricultural planning and prediction models to ensure economic prosperity. The advancement of crop yields is significantly dependent on soil factors like Ph, nutrients and organic substances along with climatic factors like rainfall, temperature and humidity. Data of such factors are recorded to serve the purpose of scientific and statistical analysis. With the help of applying different data mining techniques on them, we are able to determine effective parameters to predict crop yield from different locations. This paper mainly focuses on the analysis to predict Bangladesh’s four most yielding crops; wheat, jute, T-Aman and mustard. To carry out the whole experiment, we have analyzed soil properties of medium high land and high land from different sub districts of Bangladesh and also their respective climatic data and crop production of the last 6 years. For our analysis, we have applied different data mining techniques such as K-means, PAM, CLARA and DBSCAN for clustering and four linear regression methods to predict crop yields.) <|cite_end|> focused on crop yield prediction by analyzing soil properties of 28 sub-districts of Bangladesh. They applied clustering techniques such as DBSCAN, PAM, CLARA, and K-means, together with four linear regression methods, to predict crop yields. Parveen et al. reviewed machine learning methods for agricultural crop disease prediction, discussing the integration of meteorological factors and historical disease data and stressing the potential of machine learning to enhance disease management tactics.
Limitations in data availability and quality were identified, particularly in emerging regions. The intricacy of disease interactions and the need for precise and timely disease data were also stressed. Vijayakumar et al. described an early crop disease prediction method using machine learning, investigating the use of meteorological variables and plant physiological information to build forecasting models. The study emphasized the value of early disease identification for safeguarding crops, and cited the complexity of disease interactions and the need for real-time data updates as major obstacles to building reliable and accurate disease prediction models.
Ahmed et al. <|cite_start|> (Reference: Impact of weather on crops in few northern parts of Bangladesh: HCI and machine learning based approach: As Bangladesh is an agricultural country, the economy, as well as the food security of this country, mostly depends on the production level of different crops over the year. Therefore, there exists immense pressure on exaggerated crop production due to the fast growth of the population. But, the average production level is being hampered by the bad nature of the weather. We have conducted a survey on near about 100 farmers of two northern districts of Bangladesh: Pabna and Rajshahi and assessed the impact of rough nature on production. According to farmers and agriculturalists, it is noticed that rough weather causes about 30% to 70% production shortage than expectation with all other factors remaining constant. In this study, we have adopted Human-computer interaction (HCI) based approach (Soft System Methodology-SSM) to this aspect for efficacious collaboration with root-level farmers and agricultural trainers providing ease for understanding weather-related issues on the production of crops. Finally, some machine learning algorithms were also implemented on the obtained dataset to accurately classify the range of production level of rice and a comparison is made among the algorithms based on performance metrics. Moreover, an android based application is created to depict the summary of the study.) <|cite_end|> used a Human-Computer Interaction (HCI) oriented
method namely Soft System Methodology (SSM) along with different machine learning models such as Naive Bayes, j48, Sequential Minimal Optimization, and Multiclass classifier for predicting crop yields. Aggarwal et al. <|cite_start|> (Reference: Integrated Iot Approaches for Crop Recommendation and Yield-Prediction Using Machine-Learning: In this study, we present an integrated approach utilizing IoT data and machine learning models to enhance precision agriculture. We collected an extensive IoT secondary dataset from an online data repository, including environmental parameters such as temperature, humidity, and soil nutrient levels, from various sensors deployed in agricultural fields. This dataset, consisting of over 1 million data points, provided comprehensive insights into the environmental conditions affecting crop yield. The data were preprocessed and used to develop predictive models for crop yield and recommendations. Our evaluation shows that the LightGBM, Decision Tree, and Random Forest classifiers achieved high accuracy scores of 98.90%, 98.48%, and 99.31%, respectively. The IoT data collection enabled real-time monitoring and accurate data input, significantly improving the models’ performance. These findings demonstrate the potential of combining IoT and machine learning to optimize resource use and improve crop management in smart farming. Future work will focus on expanding the dataset to include more diverse environmental factors and exploring the integration of advanced deep learning techniques for even more accurate predictions.) <|cite_end|> proposed a machine learning-based integrated solution for crop recommendation and yield prediction. It developed thorough models by combining soil properties, meteorological variables, and historical yield data. The study highlighted the advantages of including these aspects while making agricultural decisions. The report discussed the difficulties in gathering data, particularly when it comes to precise and dependable soil and meteorological data. It also suggested that models needed to be improved to fit certain agroecological zones. <|paper_end|> | [
"<|reference_start|> Climate Impacts on Agriculture: Implications for Crop Production: Changes in temperature, CO 2 , and precipitation under the scenarios of climate change for the next 30 yr present a challenge to crop production. This review focuses on the impact of temperature, CO 2 , and ozone on agronomic crops and the implications for crop production. Understanding these implications for agricultural crops is critical for developing cropping systems resilient to stresses induced by climate change. There is variation among crops in their response to CO 2 , temperature, and precipitation changes and, with the regional differences in predicted climate, a situation is created in which the responses will be further complicated. For example, the temperature effects on soybean [Glycine max (L.) Merr.] could potentially cause yield reductions of 2.4% in the South but an increase of 1.7% in the Midwest. The frequency of years when temperatures exceed thresholds for damage during critical growth stages is likely to increase for some crops and regions. The increase in CO 2 contributes significantly to enhanced plant growth and improved water use efficiency (WUE); however, there may be a downscaling of these positive impacts due to higher temperatures plants will experience during their growth cycle. A challenge is to understand the interactions of the changing climatic parameters because of the interactions among temperature, CO 2 , and precipitation on plant growth and development and also on the biotic stresses of weeds, insects, and diseases. Agronomists will have to consider the variations in temperature and precipitation as part of the production system if they are to ensure the food security required by an ever increasing population. <|reference_end|>",
"<|reference_start|> Classification of soil and crop suggestion using machine learning techniques: Agriculture is the major source for living for the people of India. Agriculture research is the major source of economy for the country. Soil is an important key factor for agriculture .There are several soil varieties in India. In order to predict the type of crop that can be cultivated in that particular soil type we need to understand the features and characteristics of the soil type. Machine learning techniques provides a flexible way in this case. Classifying the soil according to the soil nutrients is much beneficial or the famers to predict which crop can be cultivated in a particular soil type. Data mining and machine learning is still an emerging technique in the field of agriculture and horticulture. In this paper we have proposed a method for classifying the soil according to the macro nutrients and micro nutrients and predicting the type of crop that can be cultivate in that particular soil type. Several type of machine learning algorithms are used such as K-Nearest Neighbour (k-NN), Bagged tree, Support vector machine(SVM) and logistic regression. KeywordsMachine learning, agriculture, soil, classification, nutrients, chemical feature, accuracy. <|reference_end|>",
"<|reference_start|> Forecasting yield by integrating agrarian factors and machine learning models: A survey: <|reference_end|>",
"<|reference_start|> An IoT-based cognitive monitoring system for early plant disease forecast: <|reference_end|>"
] | [
2,
5,
6,
7
] | {"<|cite_1|>": "ss-755892", "<|cite_2|>": "ss-1613775", "<|cite_3|>": "ss-1318508", "<|cite_4|>": "ss-755893", "<|cite_5|>": "arxiv-190711", "<|cite_6|>": "ss-1613775", "<|cite_7|>": "ss-739616", "<|cite_8|>": "ss-1613776", "<|cite_9|>": "ss-1613777", "<|cite_10|>": "ss-1613778", "<|cite_13|>": "ss-1613779", "<|cite_14|>": "ss-1613780"} |
2406.16526-1 | <|cite_start|> (Reference: Code-based Automated Program Fixing: Many programmers, when they encounter an error, would like to have the benefit of automatic fix suggestions---as long as they are, most of the time, adequate. Initial research in this direction has generally limited itself to specific areas, such as data structure classes with carefully designed interfaces, and relied on simple approaches. To provide high-quality fix suggestions in a broad area of applicability, the present work relies on the presence of contracts in the code, and on the availability of dynamic analysis to gather evidence on the values taken by expressions derived from the program text. The ideas have been built into the AutoFix-E2 automatic fix generator. Applications of AutoFix-E2 to general-purpose software, such as a library to manipulate documents, show that the approach provides an improvement over previous techniques, in particular purely model-based approaches.) <|cite_end|> <|cite_start|> (Reference: DirectFix: Looking for Simple Program Repairs: Recent advances in program repair techniques have raised the possibility of patching bugs automatically. For an automatically generated patch to be accepted by developers, it should not only resolve the bug but also satisfy certain human-related factors including readability and comprehensibility. In this paper, we focus on the simplicity of patches (the size of changes). We present a novel semantics-based repair method that generates the simplest patch such that the program structure of the buggy program is maximally preserved. To take into account the simplicity of repairs in an efficient way (i.e., Without explicitly enumerating each repair candidate for each fault location), our method fuses fault localization and repair generation into one step. We do so by leveraging partial Max SAT constraint solving and component-based program synthesis. We compare our prototype implementation, Direct Fix, with the state-of-the-art semantics-based repair tool Sem Fix, that performs fault localization before repair generation. In our experiments with SIR programs and GNU Coreutils, Direct Fix generates repairs that are simpler than those generated by Sem Fix. Since both Direct Fix and Sem Fix are test-driven repair tools, they can introduce regressions for other tests which do not drive the repair. We found that Direct Fix causes substantially less regression errors than Sem Fix.) <|cite_end|>guide the repair process by first developing a set of constraints and then solving these constraints to derive the patches. Template-based APR techniques <|cite_start|> (Reference: {Automatic Patch Generation Learned from Human-Written Patches: Patch generation is an essential software maintenance task because most software systems inevitably have bugs that need to be fixed. Unfortunately, human resources are often insufficient to fix all reported and known bugs. To address this issue, several automated patch generation techniques have been proposed. In particular, a genetic-programming-based patch generation technique, GenProg, proposed by Weimer et al., has shown promising results. However, these techniques can generate nonsensical patches due to the randomness of their mutation operations. To address this limitation, we propose a novel patch generation approach, Pattern-based Automatic program Repair (Par), using fix patterns learned from existing human-written patches. 
We manually inspected more than 60,000 human-written patches and found there are several common fix patterns. Our approach leverages these fix patterns to generate program patches automatically. We experimentally evaluated Par on 119 real bugs. In addition, a user study involving 89 students and 164 developers confirmed that patches generated by our approach are more acceptable than those generated by GenProg. Par successfully generated patches for 27 out of 119 bugs, while GenProg was successful for only 16 bugs.) <|cite_end|> <|cite_start|> (Reference: Mining Fix Patterns for FindBugs Violations: In this paper, we first collect and track a large number of fixed and unfixed violations across revisions of software. The empirical analyses reveal that there are discrepancies in the distributions of violations that are detected and those that are fixed, in terms of occurrences, spread and categories, which can provide insights into prioritizing violations. To automatically identify patterns in violations and their fixes, we propose an approach that utilizes convolutional neural networks to learn features and clustering to regroup similar instances. We then evaluate the usefulness of the identified fix patterns by applying them to unfixed violations. The results show that developers will accept and merge a majority (69/116) of fixes generated from the inferred fix patterns. It is also noteworthy that the yielded patterns are applicable to four real bugs in the Defects4J major benchmark for software testing and automated repair.) <|cite_end|> <|cite_start|> (Reference: AVATAR: Fixing Semantic Bugs with Fix Patterns of Static Analysis Violations: Fix pattern-based patch generation is a promising direction in Automated Program Repair (APR). Notably, it has been demonstrated to produce more acceptable and correct patches than the patches obtained with mutation operators through genetic programming. The performance of pattern-based APR systems, however, depends on the fix ingredients mined from fix changes in development histories. Unfortunately, collecting a reliable set of bug fixes in repositories can be challenging. In this paper, we propose to investigate the possibility in an APR scenario of leveraging code changes that address violations by static bug detection tools. To that end, we build the AVATAR APR system, which exploits fix patterns of static analysis violations as ingredients for patch generation. Evaluated on the Defects4J benchmark, we show that, assuming a perfect localization of faults, AVATAR can generate correct patches to fix 34/39 bugs. We further find that AVATAR yields performance metrics that are comparable to that of the closely-related approaches in the literature. While AVATAR outperforms many of the state-of-the-art pattern-based APR systems, it is mostly complementary to current approaches. Overall, our study highlights the relevance of static bug finding tools as indirect contributors of fix ingredients for addressing code defects identified with functional test cases.) <|cite_end|> <|cite_start|> (Reference: iFixR: Bug Report driven Program Repair: Issue tracking systems are commonly used in modern software development for collecting feedback from users and developers. An ultimate automation target of software maintenance is then the systematization of patch generation for user-reported bugs. 
Although this ambition is aligned with the momentum of automated program repair, the literature has, so far, mostly focused on generate-and-validate setups where fault localization and patch generation are driven by a well-defined test suite. On the one hand, however, the common (yet strong) assumption on the existence of relevant test cases does not hold in practice for most development settings: many bugs are reported without the available test suite being able to reveal them. On the other hand, for many projects, the number of bug reports generally outstrips the resources available to triage them. Towards increasing the adoption of patch generation tools by practitioners, we investigate a new repair pipeline, iFixR, driven by bug reports: (1) bug reports are fed to an IR-based fault localizer; (2) patches are generated from fix patterns and validated via regression testing; (3) a prioritized list of generated patches is proposed to developers. We evaluate iFixR on the Defects4J dataset, which we enriched (i.e., faults are linked to bug reports) and carefully-reorganized (i.e., the timeline of test-cases is naturally split). iFixR generates genuine/plausible patches for 21/44 Defects4J faults with its IR-based fault localizer. iFixR accurately places a genuine/plausible patch among its top-5 recommendation for 8/13 of these faults (without using future test cases in generation-and-validation).) <|cite_end|> <|cite_start|> (Reference: FixMiner: Mining Relevant Fix Patterns for Automated Program Repair: Patching is a common activity in software development. It is generally performed on a source code base to address bugs or add new functionalities. In this context, given the recurrence of bugs across projects, the associated similar patches can be leveraged to extract generic fix actions. While the literature includes various approaches leveraging similarity among patches to guide program repair, these approaches often do not yield fix patterns that are tractable and reusable as actionable input to APR systems. In this paper, we propose a systematic and automated approach to mining relevant and actionable fix patterns based on an iterative clustering strategy applied to atomic changes within patches. The goal of FixMiner is thus to infer separate and reusable fix patterns that can be leveraged in other patch generation systems. Our technique, FixMiner, leverages Rich Edit Script which is a specialized tree structure of the edit scripts that captures the AST-level context of the code changes. FixMiner uses different tree representations of Rich Edit Scripts for each round of clustering to identify similar changes. These are abstract syntax trees, edit actions trees, and code context trees. We have evaluated FixMiner on thousands of software patches collected from open source projects. Preliminary results show that we are able to mine accurate patterns, efficiently exploiting change information in Rich Edit Scripts. We further integrated the mined patterns to an automated program repair prototype, PARFixMiner, with which we are able to correctly fix 26 bugs of the Defects4J benchmark. Beyond this quantitative performance, we show that the mined fix patterns are sufficiently relevant to produce patches with a high probability of correctness: 81% of PARFixMiner's generated plausible patches are correct.) <|cite_end|> <|cite_start|> (Reference: {History Driven Program Repair: Effective automated program repair techniques have great potential to reduce the costs of debugging and maintenance. 
Previously proposed automated program repair (APR) techniques often follow a generate-and-validate and test-case-driven procedure: They first randomly generate a large pool of fix candidates and then exhaustively validate the quality of the candidates by testing them against existing or provided test suites. Unfortunately, many real-world bugs cannot be repaired by existing techniques even after more than 12 hours of computation in a multi-core cloud environment. More work is needed to advance the capabilities of modern APR techniques. We propose a new technique that utilizes the wealth of bug fixesacross projects in their development history to effectively guide and drive a programrepair process. Our main insight is that recurring bug fixes are common inreal-world applications, and that previously-appearing fix patterns canprovide useful guidance to an automated repair technique. Based on this insight, our technique first automaticallymines bug fix patterns from the history of many projects. We then employ existingmutation operators to generate fix candidates for a given buggy program. Candidates that match frequently occurring historical bug fixes are consideredmore likely to be relevant, and we thus give them priority inthe random search process. Finally, candidates thatpass all the previously failed test cases are recommended as likely fixes. We compare our technique against existinggenerate-and-validate and test-driven APR approaches using 90 bugs from 5 Javaprograms. The experiment results show that our technique can producegood-quality fixes for many more bugs as compared to the baselines, while beingreasonably computationally efficient: it takes less than 20minutes, on average, to correctly fix a bug.) <|cite_end|> <|cite_start|> (Reference: TBar: Revisiting Template-based Automated Program Repair: We revisit the performance of template-based APR to build comprehensive knowledge about the effectiveness of fix patterns, and to highlight the importance of complementary steps such as fault localization or donor code retrieval. To that end, we first investigate the literature to collect, summarize and label recurrently-used fix patterns. Based on the investigation, we build TBar, a straightforward APR tool that systematically attempts to apply these fix patterns to program bugs. We thoroughly evaluate TBar on the Defects4J benchmark. In particular, we assess the actual qualitative and quantitative diversity of fix patterns, as well as their effectiveness in yielding plausible or correct patches. Eventually, we find that, assuming a perfect fault localization, TBar correctly/plausibly fixes 74/101 bugs. Replicating a standard and practical pipeline of APR assessment, we demonstrate that TBar correctly fixes 43 bugs from Defects4J, an unprecedented performance in the literature (including all approaches, i.e., template-based, stochastic mutation-based or synthesis-based APR).) <|cite_end|> <|cite_start|> (Reference: {Staged Program Repair with Condition Synthesis: We present SPR, a new program repair system that combines staged program repair and condition synthesis. These techniques enable SPR to work productively with a set of parameterized transformation schemas to generate and efficiently search a rich space of program repairs. Together these techniques enable SPR to generate correct repairs for over five times as many defects as previous systems evaluated on the same benchmark set.) 
<|cite_end|>rely on various targeted repair templates (also called ``fix patterns'' in the literature) to generate patches, and these techniques are effective against specific types of software defects.
Enlightened by the huge success of deep learning in a wide variety of application areas, researchers have also investigated the use of deep learning for the APR task in recent years. As a result, there are an abundance of DL-based APR techniques in the literature. Gupta et al. <|cite_start|> (Reference: {DeepFix: Fixing Common C Language Errors by Deep Learning: The problem of automatically fixing programming errors is a very active research topic in software engineering. This is a challenging problem as fixing even a single error may require analysis of the entire program. In practice, a number of errors arise due to programmer's inexperience with the programming language or lack of attention to detail. We call these common programming errors. These are analogous to grammatical errors in natural languages. Compilers detect such errors, but their error messages are usually inaccurate. In this work, we present an end-to-end solution, called DeepFix, that can fix multiple such errors in a program without relying on any external tool to locate or fix them. At the heart of DeepFix is a multi-layered sequence-to-sequence neural network with attention which is trained to predict erroneous program locations along with the required correct statements. On a set of 6971 erroneous C programs written by students for 93 programming tasks, DeepFix could fix 1881 (27%) programs completely and 1338 (19%) programs partially.) <|cite_end|>propose DeepFix, an APR model to repair compilation defects in C language code. DeepFix is a multi-layered sequence-to-sequence neural network that directly predicts defect locations and the corresponding correct code for the defect. White et al. <|cite_start|> (Reference: DeepRepair: Style-Guided Repairing for Deep Neural Networks in the Real-World Operational Environment: Deep neural networks (DNNs) are continuously expanding their application to various domains due to their high performance. Nevertheless, a well-trained DNN after deployment could oftentimes raise errors during practical use in the operational environment due to the mismatching between distributions of the training dataset and the potential unknown noise factors in the operational environment, e.g., weather, blur, noise, etc. Hence, it poses a rather important problem for the DNNs’ real-world applications: how to repair the deployed DNNs for correcting the failure samples under the deployed operational environment while not harming their capability of handling normal or clean data with limited failure samples we can collect. In this article, we propose a style-guided data augmentation for repairing DNN in the operational environment, which learns and introduces the unknown failure patterns within the failure samples into the training data via the style transfer. Moreover, we further propose the clustering-based failure data generation for much more effective style-guided data augmentation. We conduct a large-scale evaluation with 15 degradation factors that may happen in the real world and compare with four state-of-the-art data augmentation methods and two DNN repairing methods. Our technique successfully repairs three convolutional neural networks and two recurrent neural networks with averaging 62.88% and 39.02% accuracy enhancements on the 15 failure patterns, respectively, achieving higher repairing performance than state-of-the-art repairing methods on the most failure patterns with even better accuracy on clean datasets.) 
<|cite_end|>propose an APR model named DeepRepair, which infers code similarity through deep learning. This technology can sort code fragments based on their similarities with suspicious elements and can convert statements by mapping identifiers outside the scope to similar identifiers within the scope. Chen et al. <|cite_start|> (Reference: SequenceR: Sequence-to-Sequence Learning for End-to-End Program Repair: This paper presents a novel end-to-end approach to program repair based on sequence-to-sequence learning. We devise, implement, and evaluate a system, called SequenceR, for fixing bugs based on sequence-to-sequence learning on source code. This approach uses the copy mechanism to overcome the unlimited vocabulary problem that occurs with big code. Our system is data-driven; we train it on 35,578 samples, carefully curated from commits to open-source repositories. We evaluate it on 4,711 independent real bug fixes, as well on the Defects4J benchmark used in program repair research. SequenceR is able to perfectly predict the fixed line for 950/4711 testing samples, and find correct patches for 14 bugs in Defects4J. It captures a wide range of repair operators without any domain-specific top-down design.) <|cite_end|>propose a technology named SequenceR for end-to-end APR on top of the sequence-to-sequence model, which uses abstract context to simulate the process of analyzing and fixing bugs conducted by developers. Lutellier et al. <|cite_start|> (Reference: CoCoNuT: combining context-aware neural translation models using ensemble
for program repair: Automated generate-and-validate (GV) program repair techniques (APR) typically rely on hard-coded rules, thus only fixing bugs following specific fix patterns. These rules require a significant amount of manual effort to discover and it is hard to adapt these rules to different programming languages. To address these challenges, we propose a new G&V technique—CoCoNuT, which uses ensemble learning on the combination of convolutional neural networks (CNNs) and a new context-aware neural machine translation (NMT) architecture to automatically fix bugs in multiple programming languages. To better represent the context of a bug, we introduce a new context-aware NMT architecture that represents the buggy source code and its surrounding context separately. CoCoNuT uses CNNs instead of recurrent neural networks (RNNs), since CNN layers can be stacked to extract hierarchical features and better model source code at different granularity levels (e.g., statements and functions). In addition, CoCoNuT takes advantage of the randomness in hyperparameter tuning to build multiple models that fix different bugs and combines these models using ensemble learning to fix more bugs. Our evaluation on six popular benchmarks for four programming languages (Java, C, Python, and JavaScript) shows that CoCoNuT correctly fixes (i.e., the first generated patch is semantically equivalent to the developer’s patch) 509 bugs, including 309 bugs that are fixed by none of the 27 techniques with which we compare.) <|cite_end|>propose CoCoNuT, a technology for APR using a neural machine translation model based on convolutional neural networks. Zhu et al. <|cite_start|> (Reference: A Syntax-Guided Edit Decoder for Neural Program Repair: Automated Program Repair (APR) helps improve the efficiency of software development and maintenance. Recent APR techniques use deep learning, particularly the encoder-decoder architecture, to generate patches. Though existing DL-based APR approaches have proposed different encoder architectures, the decoder remains to be the standard one, which generates a sequence of tokens one by one to replace the faulty statement. This decoder has multiple limitations: 1) allowing to generate syntactically incorrect programs, 2) inefficiently representing small edits, and 3) not being able to generate project-specific identifiers. In this paper, we propose Recoder, a syntax-guided edit decoder with placeholder generation. Recoder is novel in multiple aspects: 1) Recoder generates edits rather than modified code, allowing efficient representation of small edits; 2) Recoder is syntax-guided, with the novel provider/decider architecture to ensure the syntactic correctness of the patched program and accurate generation; 3) Recoder generates placeholders that could be instantiated as project-specific identifiers later. We conduct experiments to evaluate Recoder on 395 bugs from Defects4J v1.2 and 420 additional bugs from Defects4J v2.0. Our results show that Recoder repairs 53 bugs on Defects4J v1.2, which achieves 21.4% improvement over the previous state-of-the-art approach for single-hunk bugs (TBar). Importantly, to our knowledge, Recoder is the first DL-based APR approach that has outperformed the traditional APR approaches on this dataset. Furthermore, Recoder also repairs 19 bugs on the additional bugs from Defects4J v2.0, which is 137.5% more than TBar (8 bugs) and 850% more than SimFix (2 bugs). 
This result suggests that Recoder has better generalizability than existing APR approaches.) <|cite_end|>propose Recoder, which constrains the output of the APR model via syntax rules to repair fine-grained erroneous sentences. Ye et al. <|cite_start|> (Reference: Neural Program Repair with Execution-based Backpropagation: Neural machine translation (NMT) architectures have achieved promising results for automatic program repair. Yet, they have the limitation of generating low-quality patches (e.g., not compilable patches). This is because the existing works only optimize a purely syntactic loss function based on characters and tokens without incorporating program-specific information during neural network weight optimization. In this paper, we propose a novel program repair model called RewardRepair. The core novelty of RewardRepair is to improve NMT-based program repair with a loss function based on program compilation and test execution information, rewarding the network to produce patches that compile and that do not overfit. We conduct several experiments to evaluate RewardRepair showing that it is feasible and effective to use compilation and test execution results to optimize the underlying neural repair model. RewardRepair correctly repairs 207 bugs over four benchmarks. we report on repair success for 121 bugs that are fixed for the first time in the literature. Also, RewardRepair produces up to 45.3% of compilable patches, an improvement over the 39% by the state-of-the-art.) <|cite_end|>propose RewardRepair, which adds test information to the model to ensure that candidate patches are compilable. Xia et al. <|cite_start|> (Reference: Less Training, More Repairing Please: Revisiting Automated Program Repair via Zero-shot Learning: Due to the promising future of Automated Program Repair (APR), researchers have proposed various APR techniques, including heuristic-based, template-based, and constraint-based techniques. Among such classic APR techniques, template-based techniques have been widely recognized as state of the art. However, such template-based techniques require predefined templates to perform repair, and their effectiveness is thus limited. To this end, researchers leveraged the recent advances in Deep Learning to further improve APR. Such learning-based techniques view APR as a Neural Machine Translation problem, using the buggy/fixed code snippets as the source/target languages for translation. In this way, such techniques heavily rely on large numbers of high-quality bug-fixing commits, which can be extremely costly and challenging to construct. Furthermore, the edit variety of these learning-based techniques are limited to the available bug-fixes within their training datasets. Therefore, in this paper, we aim to revisit the learning-based APR problem, and propose AlphaRepair, to leverage zero-shot learning directly using large pre-trained code models for APR. Our main insight is instead of modeling what a repair edit should look like, we can directly predict what the correct code is based on the context information. We have implemented AlphaRepair as a practical multilingual APR tool based on the recent CodeBERT model. Our results on the widely used Defects4J benchmark show that AlphaRepair can substantially outperform state-of-the-art APR tools. 
We also studied the impact of different design choices and show that AlphaRepair performs even better on a newer version of Defects4J (2.0) with 3.3X more fixes than best performing baseline, indicating that AlphaRepair can potentially avoid the dataset-overfitting issue of existing learning-based techniques.) <|cite_end|>treat program repair tasks as text fill-in-the-blanks and generate patches based on contextual information. Zhu et al. <|cite_start|> (Reference: Tare: Type-aware neural program repair: Automated program repair (APR) aims to reduce the effort of software development. With the development of deep learning, lots of DL-based APR approaches have been proposed using an encoder-decoder architecture. Despite the promising performance, these models share the same limitation: generating lots of untypable patches. The main reason for this phenomenon is that the existing models do not consider the constraints of code captured by a set of typing rules. In this paper, we propose, Tare, a type-aware model for neural program repair to learn the typing rules. To encode an individual typing rule, we introduce three novel components: (1) a novel type of grammars, T-Grammar, that integrates the type information into a standard grammar, (2) a novel representation of code, T-Graph, that integrates the key information needed for type checking an AST, and (3) a novel type-aware neural program repair approach, Tare, that encodes the T-Graph and generates the patches guided by T-Grammar. The experiment was conducted on three benchmarks, 393 bugs from Defects4J v1.2, 444 additional bugs from Defects4J v2.0, and 40 bugs from QuixBugs. Our results show that Tare repairs 62, 32, and 27 bugs on these benchmarks respectively, and outperforms the existing APR approaches on all benchmarks. Further analysis also shows that Tare tends to generate more compilable patches than the existing DL-based APR approaches with the typing rule information.) <|cite_end|>propose a type-aware model for the APR task to reduce unusable patches.
At present, a majority of these DL-based APR techniques are built on top of sequence-to-sequence models and generate the correct code in an autoregressive (AR) manner. An AR model must wait for the output at the previous position before producing the output of the current step, which results in slow inference. Consequently, AR decoding prevents real-time repair and introduces large time delays when repairing real-life complex bugs, which typically involve modifications to long code snippets. These negative consequences hinder the adoption of DL-based APR techniques in real-life software development and maintenance.
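To see where the latency comes from, the following toy sketch contrasts token-by-token AR decoding with parallel NAR decoding; the dummy scoring function is a stand-in assumption for a real neural decoder and does not represent the NARRepair model.
\begin{verbatim}
# Toy contrast of autoregressive (AR) vs. non-autoregressive (NAR) decoding.
# The scoring function below is a dummy stand-in for a neural decoder
# (an assumption for illustration; it is not the NARRepair model).
import numpy as np

VOCAB = ["if", "(", "x", "!=", "null", ")", "return", ";"]
rng = np.random.default_rng(0)

def dummy_scores(prefix, position):
    """Pretend decoder call: one score per vocabulary token."""
    return rng.normal(size=len(VOCAB))

def decode_ar(length):
    # Each position must wait for the previously generated tokens,
    # so decoding needs `length` sequential calls.
    tokens = []
    for pos in range(length):
        scores = dummy_scores(tokens, pos)
        tokens.append(VOCAB[int(np.argmax(scores))])
    return tokens

def decode_nar(length):
    # All positions are scored independently of each other; a real NAR
    # decoder computes them in a single batched forward pass.
    scores = np.stack([dummy_scores([], pos) for pos in range(length)])
    return [VOCAB[i] for i in np.argmax(scores, axis=1)]

print("AR :", decode_ar(6))
print("NAR:", decode_nar(6))
\end{verbatim}
In the AR loop, the i-th call cannot start before the (i-1)-th finishes, whereas the NAR scores for all positions are independent and could be computed concurrently, which is exactly the speed-up that NAR models aim for.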
\vspace{1.0mm}
\noindent
\textbf{Non-autoregressive Models.} The purpose of NAR models is to reduce inference time by generating target sentences in parallel. Gu et al. <|cite_start|> (Reference: Non-Autoregressive Neural Machine Translation: Existing approaches to neural machine translation condition each output word on previously generated outputs. We introduce a model that avoids this autoregressive property and produces its outputs in parallel, allowing an order of magnitude lower latency during inference. Through knowledge distillation, the use of input token fertilities as a latent variable, and policy gradient fine-tuning, we achieve this at a cost of as little as 2.0 BLEU points relative to the autoregressive Transformer network used as a teacher. We demonstrate substantial cumulative improvements associated with each of the three aspects of our training strategy, and validate our approach on IWSLT 2016 English-German and two WMT language pairs. By sampling fertilities in parallel at inference time, our non-autoregressive model achieves near-state-of-the-art performance of 29.8 BLEU on WMT 2016 English-Romanian.) <|cite_end|>propose the concept and the first NAR model, which assumes that all words in the target sentence are independent of each other and can output all target words in parallel in one step. Bao et al. <|cite_start|> (Reference: Non-autoregressive Transformer by Position Learning: Non-autoregressive models are promising on various text generation tasks. Previous work hardly considers to explicitly model the positions of generated words. However, position modeling is an essential problem in non-autoregressive text generation. In this study, we propose PNAT, which incorporates positions as a latent variable into the text generative process. Experimental results show that PNAT achieves top results on machine translation and paraphrase generation tasks, outperforming several strong baselines.) <|cite_end|>propose PNAR, which uses latent variables to capture the arrangement information of target words for making the arrangement of target words more appropriate. Shu et al. <|cite_start|> (Reference: Latent-Variable Non-Autoregressive Neural Machine Translation with Deterministic Inference Using a Delta Posterior: Although neural machine translation models reached high translation quality, the autoregressive nature makes inference difficult to parallelize and leads to high translation latency. Inspired by recent refinement-based approaches, we propose LaNMT, a latent-variable non-autoregressive model with continuous latent variables and deterministic inference procedure. In contrast to existing approaches, we use a deterministic inference algorithm to find the target sequence that maximizes the lowerbound to the log-probability. During inference, the length of translation automatically adapts itself. Our experiments show that the lowerbound can be greatly increased by running the inference algorithm, resulting in significantly improved translation quality. Our proposed model closes the performance gap between non-autoregressive and autoregressive approaches on ASPEC Ja-En dataset with 8.6x faster decoding. On WMT'14 En-De dataset, our model narrows the gap with autoregressive baseline to 2.0 BLEU points with 12.5x speedup. By decoding multiple initial latent variables in parallel and rescore using a teacher model, the proposed model further brings the gap down to 1.0 BLEU point on WMT'14 En-De task with 6.8x speedup.) 
<|cite_end|>use a spherical Gaussian to generate latent variables for each input word to increase the dependence between words in the target sentence. Ran et al. <|cite_start|> (Reference: Guiding Non-Autoregressive Neural Machine Translation Decoding with Reordering Information: Non-autoregressive neural machine translation (NAT) generates each target word in parallel and has achieved promising inference acceleration. However, existing NAT models still have a big gap in translation quality compared to autoregressive neural machine translation models due to the enormous decoding space. To address this problem, we propose a novel NAT framework named ReorderNAT which explicitly models the reordering information in the decoding procedure. We further introduce deterministic and non-deterministic decoding strategies that utilize reordering information to narrow the decoding search space in our proposed ReorderNAT. Experimental results on various widely-used datasets show that our proposed model achieves better performance compared to existing NAT models, and even achieves comparable translation quality as autoregressive translation models with a significant speedup.) <|cite_end|>use latent variables to establish the position information of the target word. Ma et al. <|cite_start|> (Reference: FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow: Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length.) <|cite_end|>use generative flow to model latent variables containing rich information. Stern et al. <|cite_start|> (Reference: Insertion Transformer: Flexible Sequence Generation via Insertion Operations: We present the Insertion Transformer, an iterative, partially autoregressive model for sequence generation based on insertion operations. Unlike typical autoregressive models which rely on a fixed, often left-to-right ordering of the output, our approach accommodates arbitrary orderings by allowing for tokens to be inserted anywhere in the sequence during decoding. This flexibility confers a number of advantages: for instance, not only can our model be trained to follow specific orderings such as left-to-right generation or a binary tree traversal, but it can also be trained to maximize entropy over all valid insertions for robustness. In addition, our model seamlessly accommodates both fully autoregressive generation (one insertion at a time) and partially autoregressive generation (simultaneous insertions at multiple locations). 
We validate our approach by analyzing its performance on the WMT 2014 English-German machine translation task under various settings for training and decoding. We find that the Insertion Transformer outperforms many prior non-autoregressive approaches to translation at comparable or better levels of parallelism, and successfully recovers the performance of the original Transformer while requiring only logarithmically many iterations during decoding.) <|cite_end|>propose a NAR model based on insertion operations, which generates a subsequence of the final result sequence through iteration at each step until all insertion operations are empty and the iteration ends. However, given the three major limitations outlined in Section 1, naively using existing NAR models for the APR task cannot obtain satisfactory results. Therefore, we propose the NARRepair model in this paper to meet the unique needs of the APR task.
\begin{figure*}[]
\centering
\includegraphics[width=1\textwidth]{2.png}
\caption{\label{fig:frog2}An overview of the NARRepair architecture. }
\end{figure*} <|paper_end|> | [
"<|reference_start|> {History Driven Program Repair: Effective automated program repair techniques have great potential to reduce the costs of debugging and maintenance. Previously proposed automated program repair (APR) techniques often follow a generate-and-validate and test-case-driven procedure: They first randomly generate a large pool of fix candidates and then exhaustively validate the quality of the candidates by testing them against existing or provided test suites. Unfortunately, many real-world bugs cannot be repaired by existing techniques even after more than 12 hours of computation in a multi-core cloud environment. More work is needed to advance the capabilities of modern APR techniques. We propose a new technique that utilizes the wealth of bug fixesacross projects in their development history to effectively guide and drive a programrepair process. Our main insight is that recurring bug fixes are common inreal-world applications, and that previously-appearing fix patterns canprovide useful guidance to an automated repair technique. Based on this insight, our technique first automaticallymines bug fix patterns from the history of many projects. We then employ existingmutation operators to generate fix candidates for a given buggy program. Candidates that match frequently occurring historical bug fixes are consideredmore likely to be relevant, and we thus give them priority inthe random search process. Finally, candidates thatpass all the previously failed test cases are recommended as likely fixes. We compare our technique against existinggenerate-and-validate and test-driven APR approaches using 90 bugs from 5 Javaprograms. The experiment results show that our technique can producegood-quality fixes for many more bugs as compared to the baselines, while beingreasonably computationally efficient: it takes less than 20minutes, on average, to correctly fix a bug. <|reference_end|>",
"<|reference_start|> {Staged Program Repair with Condition Synthesis: We present SPR, a new program repair system that combines staged program repair and condition synthesis. These techniques enable SPR to work productively with a set of parameterized transformation schemas to generate and efficiently search a rich space of program repairs. Together these techniques enable SPR to generate correct repairs for over five times as many defects as previous systems evaluated on the same benchmark set. <|reference_end|>",
"<|reference_start|> Neural Program Repair with Execution-based Backpropagation: Neural machine translation (NMT) architectures have achieved promising results for automatic program repair. Yet, they have the limitation of generating low-quality patches (e.g., not compilable patches). This is because the existing works only optimize a purely syntactic loss function based on characters and tokens without incorporating program-specific information during neural network weight optimization. In this paper, we propose a novel program repair model called RewardRepair. The core novelty of RewardRepair is to improve NMT-based program repair with a loss function based on program compilation and test execution information, rewarding the network to produce patches that compile and that do not overfit. We conduct several experiments to evaluate RewardRepair showing that it is feasible and effective to use compilation and test execution results to optimize the underlying neural repair model. RewardRepair correctly repairs 207 bugs over four benchmarks. we report on repair success for 121 bugs that are fixed for the first time in the literature. Also, RewardRepair produces up to 45.3% of compilable patches, an improvement over the 39% by the state-of-the-art. <|reference_end|>",
"<|reference_start|> FlowSeq: Non-Autoregressive Conditional Sequence Generation with Generative Flow: Most sequence-to-sequence (seq2seq) models are autoregressive; they generate each token by conditioning on previously generated tokens. In contrast, non-autoregressive seq2seq models generate all tokens in one pass, which leads to increased efficiency through parallel processing on hardware such as GPUs. However, directly modeling the joint distribution of all tokens simultaneously is challenging, and even with increasingly complex model structures accuracy lags significantly behind autoregressive models. In this paper, we propose a simple, efficient, and effective model for non-autoregressive sequence generation using latent variable models. Specifically, we turn to generative flow, an elegant technique to model complex distributions using neural networks, and design several layers of flow tailored for modeling the conditional density of sequential latent variables. We evaluate this model on three neural machine translation (NMT) benchmark datasets, achieving comparable performance with state-of-the-art non-autoregressive NMT models and almost constant decoding time w.r.t the sequence length. <|reference_end|>"
] | [
7,
9,
15,
22
] | {"<|multi_cite_1_1|>": "ss-1270798", "<|multi_cite_1_2|>": "ss-1270799", "<|multi_cite_1_3|>": "ss-763378", "<|multi_cite_1_4|>": "ss-737862", "<|multi_cite_1_5|>": "arxiv-142664", "<|multi_cite_1_6|>": "ss-1294996", "<|multi_cite_1_7|>": "arxiv-214196", "<|multi_cite_1_8|>": "ss-1273149", "<|multi_cite_1_9|>": "ss-825104", "<|multi_cite_1_10|>": "ss-825104", "<|multi_cite_1_11|>": "ss-1270845", "<|multi_cite_1_12|>": "ss-1291795", "<|multi_cite_1_13|>": "ss-804963", "<|multi_cite_1_14|>": "ss-804964", "<|multi_cite_1_15|>": "ss-1286497", "<|multi_cite_1_16|>": "ss-1291804", "<|multi_cite_1_17|>": "ss-1270844", "<|multi_cite_1_18|>": "ss-725253", "<|multi_cite_1_20|>": "arxiv-177499", "<|multi_cite_1_21|>": "arxiv-215473", "<|multi_cite_1_22|>": "arxiv-181642", "<|multi_cite_2_1|>": "ss-1270798", "<|multi_cite_2_2|>": "arxiv-143702", "<|multi_cite_2_3|>": "ss-1382603", "<|multi_cite_3_1|>": "ss-1270799", "<|multi_cite_3_2|>": "ss-763378", "<|multi_cite_3_3|>": "arxiv-179875", "<|multi_cite_4_1|>": "ss-737862", "<|multi_cite_4_2|>": "arxiv-174961", "<|multi_cite_4_3|>": "arxiv-142664", "<|multi_cite_5_1|>": "ss-1616853", "<|multi_cite_5_2|>": "arxiv-186744", "<|multi_cite_5_3|>": "arxiv-348606", "<|multi_cite_5_4|>": "arxiv-174528", "<|multi_cite_5_5|>": "ss-751579", "<|cite_6|>": "arxiv-186744", "<|cite_7|>": "arxiv-339875", "<|multi_cite_8_1|>": "ss-1861107", "<|multi_cite_8_2|>": "ss-1861108", "<|multi_cite_8_3|>": "ss-1861109", "<|multi_cite_9_1|>": "arxiv-139372", "<|multi_cite_9_2|>": "arxiv-223724", "<|cite_10|>": "arxiv-139372", "<|cite_11|>": "arxiv-259621", "<|multi_cite_12_1|>": "arxiv-175879", "<|multi_cite_12_2|>": "arxiv-249176", "<|multi_cite_12_3|>": "arxiv-216284", "<|cite_13|>": "ss-728302", "<|cite_14|>": "ss-1250879", "<|multi_cite_15_1|>": "ss-701036", "<|multi_cite_15_2|>": "ss-784992", "<|multi_cite_16_1|>": "ss-1270798", "<|multi_cite_16_2|>": "arxiv-143702", "<|multi_cite_16_3|>": "ss-1382603", "<|multi_cite_16_4|>": "ss-1382603", "<|multi_cite_17_1|>": "arxiv-179875", "<|multi_cite_17_2|>": "ss-1270799", "<|multi_cite_17_3|>": "arxiv-57718", "<|multi_cite_17_4|>": "arxiv-19019", "<|multi_cite_17_5|>": "ss-763378", "<|multi_cite_18_1|>": "ss-737862", "<|multi_cite_18_2|>": "arxiv-142664", "<|multi_cite_18_3|>": "ss-1220020", "<|multi_cite_18_4|>": "arxiv-214196", "<|multi_cite_18_5|>": "arxiv-174961", "<|multi_cite_18_6|>": "ss-1294996", "<|multi_cite_18_7|>": "arxiv-196068", "<|multi_cite_18_8|>": "ss-1294993", "<|cite_19|>": "ss-704143", "<|cite_20|>": "ss-1616853", "<|cite_21|>": "arxiv-186744", "<|cite_22|>": "ss-751579", "<|cite_23|>": "arxiv-348606", "<|cite_24|>": "arxiv-339875", "<|cite_25|>": "arxiv-434468", "<|cite_26|>": "ss-1861110", "<|cite_27|>": "arxiv-139372", "<|cite_28|>": "arxiv-236037", "<|cite_29|>": "arxiv-219487", "<|cite_30|>": "arxiv-232717", "<|cite_31|>": "arxiv-222104", "<|cite_32|>": "arxiv-190841"} |
1809.07524 | <|paper_start|> Title: Diffraction-Aware Sound Localization for a Non-Line-of-Sight Source
Abstract: Diffraction-Aware Sound Localization for a Non-Line-of-Sight Source: We present a novel sound localization algorithm for a non-line-of-sight (NLOS) sound source in indoor environments. Our approach exploits the diffraction properties of sound waves as they bend around a barrier or an obstacle in the scene. We combine a ray tracing based sound propagation algorithm with a Uniform Theory of Diffraction (UTD) model, which simulates bending effects by placing a virtual sound source on a wedge in the environment. We precompute the wedges of a reconstructed mesh of an indoor scene and use them to generate diffraction acoustic rays to localize the 3D position of the source. Our method identifies the convergence region of those generated acoustic rays as the estimated source position based on a particle filter. We have evaluated our algorithm in multiple scenarios consisting of a static and a dynamic NLOS sound source. In our tested cases, our approach can localize a source position with an average accuracy error of 0.7m, measured as the L2 distance between estimated and actual source locations, in a 7m x 7m x 3m room. Furthermore, we observe a 37% to 130% improvement in accuracy over a state-of-the-art localization method that does not model diffraction effects, especially when a sound source is not visible to the robot.
Introduction
\label{sec:1}
\begin{figure}[t]
\centering
\subfloat[A Non-Line-of-Sight (NLOS) moving source scene around an
obstacle. Our method can localize its position using acoustic
sensors and our diffraction-aware ray tracing.]
{\includegraphics[width=0.8\columnwidth]{figures/9_result_environment_of_moving_v2.pdf}\label{fig:environment_moving_ss}}\\
\subfloat[Accuracy errors, measured as the L2 distance between the estimated and actual 3D locations of a sound source, for the dynamic source. Our method models diffraction effects and improves the localization accuracy as compared to only modeling indirect reflections <|cite_start|> (Reference: Reflection-Aware Sound Source Localization: We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals with a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8m tested in a room of 7m by 7m area with 3m height, including a mobile and non-line-of-sight sound source. We also reveal that the modeling of indirect rays increases the localization accuracy by 40% compared to only using direct acoustic rays.) <|cite_end|>]
{\includegraphics[width=0.8\columnwidth]{figures/8_result_moving_clapping.pdf}\label{fig:result_moving_ss}}
\caption{
These figures show (a) the testing environment (7m by 7m with 3m height) and (b) the accuracy
error of our method for the dynamically moving sound source.
The source moves along the red trajectory, and the obstacle makes the source invisible to the listener for part of the trajectory.
The source is invisible from 27s to 48s, during which our method maintains high accuracy while the prior method deteriorates because it does not model diffraction:
the average distance errors of our method and the prior method are 0.95m and 1.83m, respectively.
}
\vspace{-1em}
\label{fig:environment_resultGraph_moving_ss}
\end{figure}
As mobile robots are increasingly used for different applications, there is considerable interest in developing new and improved methods for localization. The main goal is to compute the current location of the robot with respect to its environment. Localization is a fundamental capability required by an autonomous robot, as the current location is used to guide its future movements or actions. We assume that a map of the environment is given
and that different sensors on the robot are used to estimate its position and orientation in the environment.
Some of the commonly used sensors include GPS, CCD or depth cameras, acoustics, etc. In particular, there is considerable work on using acoustic sensors for localization, including sonar signal processing for underwater localization and microphone arrays for indoor and outdoor scenes. Moreover, the recent use of smart microphones in commodity or IoT devices (e.g., Amazon Alexa) has triggered interest in better acoustic localization methods <|cite_start|> (Reference: Human Identification and Localization by Robots in Collaborative Environments: ) <|cite_end|> <|cite_start|> (Reference: A methodology for sound source localization and tracking: Development of 3d microphone array for near-field and far-field applications: Acoustic source localization and tracking using microphone arrays has become a focus of interest in room acoustics, teleconference systems and tracking of sound producing objects. The current methods to estimate the source localization depend on conventional time-delay estimation techniques between microphone pairs, however, ignoring the ambient noise, reflections from surrounding and reverberation in the closed space. In this study, an acoustic source localizer and tracker (ASLT) based on 3D microphone array is designed and developed for real time source detection and localization. Two practical approaches were examined and evaluated, based on direction of arrival (DOA) estimation techniques and steered power response (SPR) of array algorithms in order to improve the accuracy for tracking multiple sources in full 3D coordinates. Among time delay estimation techniques, generalized cross correlation (GCC) is employed in frequency domain using multiple combinations of microphone pairs of the array with Phase transform (PHAT) weighting function for optimum detection of sources in the presence of reverberant environments. PHAT gives a good performance in the presence of ambience, even when the signal to noise ratio (SNR) is low. For SPR, minimum variance distortion less response (MVDR) beamformer weights are evaluated and purposed for accurate source tracking applications. A microphone array is designed using six transducers in spherical configurations and used for the evaluated for proposed methodology. The measurements are carried out in a reverberant chamber under different noise conditions to validate the practicality of the algorithmic chain and finally, the results are obtained and presented to demonstrate the efficiency of the proposed microphone array design and localization technique.) <|cite_end|>.
Acoustic sensors use the properties of sound waves to compute the source location. As sound waves are emitted from a source, they propagate through the medium and reach the listener or microphone locations either along direct paths or after undergoing different wave effects, including reflections, interference, diffraction, and scattering.
Some of the earliest work on sound source localization (SSL) makes use of the
time difference of
arrival (TDOA) at the receiver <|cite_start|> (Reference: {The generalized correlation method for estimation of time delay: A maximum likelihood (ML) estimator is developed for determining time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise. This ML estimator can be realized as a pair of receiver prefilters followed by a cross correlator. The time argument at which the correlator achieves a maximum is the delay estimate. The ML estimator is compared with several other proposed processors of similar form. Under certain conditions the ML estimator is shown to be identical to one proposed by Hannan and Thomson [10] and MacDonald and Schultheiss [21]. Qualitatively, the role of the prefilters is to accentuate the signal passed to the correlator at frequencies for which the signal-to-noise (S/N) ratio is highest and, simultaneously, to suppress the noise power. The same type of prefiltering is provided by the generalized Eckart filter, which maximizes the S/N ratio of the correlator output. For low S/N ratio, the ML estimator is shown to be equivalent to Eckart prefiltering.) <|cite_end|> <|cite_start|> (Reference: Deep Neural Networks for Multiple Speaker Detection and Localization: We propose to use neural networks for simultaneous detection and localization of multiple sound sources in human-robot interaction. In contrast to conventional signal processing techniques, neural network-based sound source localization methods require fewer strong assumptions about the environment. Previous neural network-based methods have been focusing on localizing a single sound source, which do not extend to multiple sources in terms of detection and localization. In this paper, we thus propose a likelihood-based encoding of the network output, which naturally allows the detection of an arbitrary number of sources. In addition, we investigate the use of sub-band cross-correlation information as features for better localization in sound mixtures, as well as three different network architectures based on different motivations. Experiments on real data recorded from a robot show that our proposed methods significantly outperform the popular spatial spectrum-based approaches.) <|cite_end|>. These methods only exploit the direct sound and its direction at the receiver, and do not take reflections or other wave effects into account. As a result, they do not provide sufficient accuracy for many applications.
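To make the TDOA step concrete, the following is a minimal sketch of GCC-PHAT delay estimation for a single microphone pair. It is illustrative code written for this discussion rather than an implementation from any cited system; the sampling rate, microphone spacing, and speed of sound in the usage comment are assumed values.
\begin{verbatim}
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    # Cross-power spectrum with PHAT (phase transform) weighting.
    n = sig.size + ref.size
    R = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    R /= np.abs(R) + 1e-15
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Center the correlation so index max_shift corresponds to zero delay.
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / float(fs)  # delay in seconds

# Usage with assumed values: 16 kHz signals, 0.1 m spacing, c = 343 m/s.
# tau = gcc_phat(mic1, mic2, fs=16000, max_tau=0.1 / 343.0)
# doa = np.arcsin(np.clip(343.0 * tau / 0.1, -1.0, 1.0))  # angle from broadside
\end{verbatim}
The estimated delay, combined with the known microphone geometry and the speed of sound, yields a direction of arrival for that pair.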
Other techniques have been proposed to localize the position under different constraints or sensors <|cite_start|> (Reference: Probabilistic 3d sound source mapping using moving microphone array: The paper proposes a system for mapping the 3D location of a sound source using data from a microphone array, each of which gives an independent estimate of the direction. LiDAR is used to generate a 3D map of the environment and to estimate the location of the sensor in six degrees of freedom (6-DoF). By combing these modules, our system determines the direction to a sound source in 3D global space for a moving robot. From evaluation of the time series of the tracked sound stream, the position of the sound source is estimated by using the Monte Carlo localization approach. When estimated from a moving robot, an approach based on triangulation does not always adequately locate a sound source, because the estimate depends on the relationship between the location of the source and that of the moving robot. The main advantage of the proposed system is that as an audible signal is perceived, it continuously estimates the location of the source. We evaluate the system by performing an experiment using a hand-held sensor.) <|cite_end|> <|cite_start|> (Reference: Towards real-time 3d sound sources mapping with linear microphone arrays: In this paper, we present a method for real-time 3D sound sources mapping using an off-the-shelf robotic perception sensor equipped with a linear microphone array. Conventional approaches to map sound sources in 3D scenarios use dedicated 3D microphone arrays, as this type of arrays provide two degrees of freedom (DOF) observations. Our method addresses the problem of 3D sound sources mapping using a linear microphone array, which only provides one DOF observations making the estimation of the sound sources location more challenging. In the proposed method, multi hypotheses tracking is combined with a new sound source parametrisation to provide with a good initial guess for an online optimisation strategy. A joint optimisation is carried out to estimate 6 DOF sensor poses and 3 DOF landmarks together with the sound sources locations. Additionally, a dedicated sensor model is proposed to accurately model the noise of the Direction of Arrival (DOA) observation when using a linear microphone array. Comprehensive simulation and experimental results show the effectiveness of the proposed method. In addition, a real-time implementation of our method has been made available as open source software for the benefit of the community.) <|cite_end|> <|cite_start|> (Reference: Reflection-Aware Sound Source Localization: We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals with a stationary or a mobile source. 
Across different settings, our approach can localize the sound with an average distance error of 0.8m tested in a room of 7m by 7m area with 3m height, including a mobile and non-line-of-sight sound source. We also reveal that the modeling of indirect rays increases the localization accuracy by 40% compared to only using direct acoustic rays.) <|cite_end|>. This includes modeling of higher order specular reflections <|cite_start|> (Reference: Reflection-Aware Sound Source Localization: We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals with a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8m tested in a room of 7m by 7m area with 3m height, including a mobile and non-line-of-sight sound source. We also reveal that the modeling of indirect rays increases the localization accuracy by 40% compared to only using direct acoustic rays.) <|cite_end|> based on ray tracing and can model indirect sound effects.
\begin{figure*}[t]
\centering
\includegraphics[width=2\columnwidth]{figures/1_precomputation_part.pdf}
\vspace*{-0.3cm}
\caption{
This figure shows our precomputation phase.
We use SLAM to generate a
point cloud of an indoor environment
from the laser scanner and Kinect. The point cloud is used to
construct the mesh map via 3D reconstruction techniques.
Wedges whose two neighboring triangles meet at an angle larger
than $\theta_W$, together with their edges, are extracted from the mesh map so that
diffraction effects can be considered at runtime for sound localization.
}
\vspace{-1.0em}
\label{fig:blockDiagram_precomputation}
\end{figure*}
In many scenarios, the sound source is not directly in the line of sight of the listener (i.e., NLOS) and is occluded by obstacles. In such cases, there may be little contribution from the direct sound, and simple methods based on TDOA may not work well. We need to model indirect sound effects, and the most common methods are based on ray-based geometric propagation paths. They assume rectilinear propagation of sound waves and use ray tracing to compute higher-order reflections. While they work well for high-frequency sounds, they do not model many low-frequency phenomena like diffraction, a type of scattering that occurs at obstacles whose size is of the same order of magnitude as the wavelength. In practice, diffraction is a fundamental mode of sound wave propagation and occurs frequently in building interiors (e.g., when the source is behind an obstacle or hidden by walls).
These effects are more prominent for low-frequency sources, such as vowel sounds in human speech,
industrial machinery, ventilation, and air-conditioning units.
\paragraph{Main Results.} We present a novel sound localization algorithm that takes into account diffraction effects, especially from non-line-of-sight or occluded sources. Our approach is built on a ray tracing framework and models diffraction along wedges using the Uniform Theory of Diffraction (UTD) <|cite_start|> (Reference: Geometrical theory of diffraction: Details the ideas underlying geometrical theory of diffraction (GTD) along with its relationships with other EM theories.) <|cite_end|>.
During the precomputation phase, we use SLAM and reconstruct a 3D triangular mesh for an
indoor environment.
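As a rough illustration of the wedge-extraction step summarized in Fig.~\ref{fig:blockDiagram_precomputation}, the sketch below collects mesh edges whose two adjacent triangles deviate by more than a threshold angle. This is our own illustrative code (the paper does not list an implementation), and treating the angle between face normals as a proxy for the wedge threshold $\theta_W$ is an assumption.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def extract_wedge_edges(vertices, faces, theta_w_deg=30.0):
    # vertices: (V, 3) array; faces: (F, 3) array of vertex indices.
    # Returns edges (vertex-index pairs) that act as diffraction wedges.
    def face_normal(f):
        a, b, c = vertices[f]
        n = np.cross(b - a, c - a)
        return n / (np.linalg.norm(n) + 1e-12)

    edge_to_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for i in range(3):
            e = tuple(sorted((int(f[i]), int(f[(i + 1) % 3]))))
            edge_to_faces[e].append(fi)

    wedges = []
    for edge, adj in edge_to_faces.items():
        if len(adj) != 2:              # skip boundary / non-manifold edges
            continue
        n0, n1 = face_normal(faces[adj[0]]), face_normal(faces[adj[1]])
        cos_angle = np.clip(np.dot(n0, n1), -1.0, 1.0)
        if np.degrees(np.arccos(cos_angle)) > theta_w_deg:
            wedges.append(edge)        # sharp enough to act as a wedge
    return wedges
\end{verbatim}
The returned edges are the candidate diffraction wedges stored for use at runtime.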
At runtime, we generate direct acoustic rays along the incoming sound directions computed by TDOA. Once an acoustic ray hits the reconstructed mesh, we generate reflection rays. Furthermore, when acoustic rays pass close enough to the edges of mesh wedges according to our diffraction-criterion, we also generate diffraction acoustic rays; these model non-visible propagation paths along which the incident sound could actually have traveled
(Sec.~\ref{sec:4}).
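The precise diffraction-criterion is given in Sec.~\ref{sec:4}; as a simplified stand-in, one can test whether a traced ray passes within some tolerance of a precomputed wedge edge, as in the sketch below. The tolerance value and the approximate ray-to-segment distance computation are our own illustrative assumptions, not the paper's exact formulation.
\begin{verbatim}
import numpy as np

def ray_passes_near_edge(ray_origin, ray_dir, edge_a, edge_b, tol=0.3):
    # Approximate test: does the ray (parameter t >= 0) pass within
    # `tol` meters of the finite wedge edge edge_a--edge_b?
    u = ray_dir / np.linalg.norm(ray_dir)
    v = edge_b - edge_a
    w0 = ray_origin - edge_a
    b, c = np.dot(u, v), np.dot(v, v)
    d, e = np.dot(u, w0), np.dot(v, w0)
    denom = c - b * b                  # u is unit length, so u.u = 1
    s = 0.0 if denom < 1e-12 else float(np.clip((e - b * d) / denom, 0.0, 1.0))
    on_edge = edge_a + s * v           # candidate diffraction point on the edge
    t = max(float(np.dot(on_edge - ray_origin, u)), 0.0)
    on_ray = ray_origin + t * u
    return np.linalg.norm(on_ray - on_edge) < tol, on_edge

# If the test passes, a diffraction acoustic ray is spawned from on_edge,
# with directions given by the UTD model (Sec. 4).
\end{verbatim}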
Finally, we estimate the source position by identifying the region where the generated acoustic rays converge.
We have evaluated our method in an indoor environment with three different scenarios, which include a stationary and a dynamically moving source around an obstacle that blocks the direct line-of-sight from the listener.
In these cases, the diffracted acoustic waves are used to localize the source position.
We combine our diffraction method with reflection-aware SSL algorithm <|cite_start|> (Reference: Reflection-Aware Sound Source Localization: We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals with a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8m tested in a room of 7m by 7m area with 3m height, including a mobile and non-line-of-sight sound source. We also reveal that the modeling of indirect rays increases the localization accuracy by 40% compared to only using direct acoustic rays.) <|cite_end|> and observe
improvements from $1.22$m to $0.7$m on average and from $1.45$m to $0.79$m for the NLOS source.
Our algorithm can localize a source generating a clapping sound within $1.38$m in the worst case in
a room of dimension $7m \times 7m$ and $3$m height.
Related Work
\label{sec:2}
In this section, we give a brief overview of prior work on sound source localization and sound propagation.
\paragraph{Sound source localization (SSL).}
Over the past two decades, many approaches
have used time difference of arrival (TDOA) to localize sound sources.
Knapp \textit{et al.} presented an accurate estimation of the time difference using a generalized cross-correlation between
a pair of microphone signals <|cite_start|> (Reference: {The generalized correlation method for estimation of time delay: A maximum likelihood (ML) estimator is developed for determining time delay between signals received at two spatially separated sensors in the presence of uncorrelated noise. This ML estimator can be realized as a pair of receiver prefilters followed by a cross correlator. The time argument at which the correlator achieves a maximum is the delay estimate. The ML estimator is compared with several other proposed processors of similar form. Under certain conditions the ML estimator is shown to be identical to one proposed by Hannan and Thomson [10] and MacDonald and Schultheiss [21]. Qualitatively, the role of the prefilters is to accentuate the signal passed to the correlator at frequencies for which the signal-to-noise (S/N) ratio is highest and, simultaneously, to suppress the noise power. The same type of prefiltering is provided by the generalized Eckart filter, which maximizes the S/N ratio of the correlator output. For low S/N ratio, the ML estimator is shown to be equivalent to Eckart prefiltering.) <|cite_end|>.
He \textit{et al.} <|cite_start|> (Reference: Deep Neural Networks for Multiple Speaker Detection and Localization: We propose to use neural networks for simultaneous detection and localization of multiple sound sources in human-robot interaction. In contrast to conventional signal processing techniques, neural network-based sound source localization methods require fewer strong assumptions about the environment. Previous neural network-based methods have been focusing on localizing a single sound source, which do not extend to multiple sources in terms of detection and localization. In this paper, we thus propose a likelihood-based encoding of the network output, which naturally allows the detection of an arbitrary number of sources. In addition, we investigate the use of sub-band cross-correlation information as features for better localization in sound mixtures, as well as three different network architectures based on different motivations. Experiments on real data recorded from a robot show that our proposed methods significantly outperform the popular spatial spectrum-based approaches.) <|cite_end|> suggested a deep neural network-based
source localization algorithm that estimates the azimuth directions of multiple sources.
This approach focused on estimating an incoming direction of a sound and
did not localize the actual position of the source.
Recently, many techniques have been proposed to estimate the location of a sound
source <|cite_start|> (Reference: Probabilistic 3d sound source mapping using moving microphone array: The paper proposes a system for mapping the 3D location of a sound source using data from a microphone array, each of which gives an independent estimate of the direction. LiDAR is used to generate a 3D map of the environment and to estimate the location of the sensor in six degrees of freedom (6-DoF). By combing these modules, our system determines the direction to a sound source in 3D global space for a moving robot. From evaluation of the time series of the tracked sound stream, the position of the sound source is estimated by using the Monte Carlo localization approach. When estimated from a moving robot, an approach based on triangulation does not always adequately locate a sound source, because the estimate depends on the relationship between the location of the source and that of the moving robot. The main advantage of the proposed system is that as an audible signal is perceived, it continuously estimates the location of the source. We evaluate the system by performing an experiment using a hand-held sensor.) <|cite_end|> <|cite_start|> (Reference: Towards real-time 3d sound sources mapping with linear microphone arrays: In this paper, we present a method for real-time 3D sound sources mapping using an off-the-shelf robotic perception sensor equipped with a linear microphone array. Conventional approaches to map sound sources in 3D scenarios use dedicated 3D microphone arrays, as this type of arrays provide two degrees of freedom (DOF) observations. Our method addresses the problem of 3D sound sources mapping using a linear microphone array, which only provides one DOF observations making the estimation of the sound sources location more challenging. In the proposed method, multi hypotheses tracking is combined with a new sound source parametrisation to provide with a good initial guess for an online optimisation strategy. A joint optimisation is carried out to estimate 6 DOF sensor poses and 3 DOF landmarks together with the sound sources locations. Additionally, a dedicated sensor model is proposed to accurately model the noise of the Direction of Arrival (DOA) observation when using a linear microphone array. Comprehensive simulation and experimental results show the effectiveness of the proposed method. In addition, a real-time implementation of our method has been made available as open source software for the benefit of the community.) <|cite_end|>.
Sasaki \textit{et al.} <|cite_start|> (Reference: Probabilistic 3d sound source mapping using moving microphone array: The paper proposes a system for mapping the 3D location of a sound source using data from a microphone array, each of which gives an independent estimate of the direction. LiDAR is used to generate a 3D map of the environment and to estimate the location of the sensor in six degrees of freedom (6-DoF). By combing these modules, our system determines the direction to a sound source in 3D global space for a moving robot. From evaluation of the time series of the tracked sound stream, the position of the sound source is estimated by using the Monte Carlo localization approach. When estimated from a moving robot, an approach based on triangulation does not always adequately locate a sound source, because the estimate depends on the relationship between the location of the source and that of the moving robot. The main advantage of the proposed system is that as an audible signal is perceived, it continuously estimates the location of the source. We evaluate the system by performing an experiment using a hand-held sensor.) <|cite_end|>
and Su \textit{et al.} <|cite_start|> (Reference: Towards real-time 3d sound sources mapping with linear microphone arrays: In this paper, we present a method for real-time 3D sound sources mapping using an off-the-shelf robotic perception sensor equipped with a linear microphone array. Conventional approaches to map sound sources in 3D scenarios use dedicated 3D microphone arrays, as this type of arrays provide two degrees of freedom (DOF) observations. Our method addresses the problem of 3D sound sources mapping using a linear microphone array, which only provides one DOF observations making the estimation of the sound sources location more challenging. In the proposed method, multi hypotheses tracking is combined with a new sound source parametrisation to provide with a good initial guess for an online optimisation strategy. A joint optimisation is carried out to estimate 6 DOF sensor poses and 3 DOF landmarks together with the sound sources locations. Additionally, a dedicated sensor model is proposed to accurately model the noise of the Direction of Arrival (DOA) observation when using a linear microphone array. Comprehensive simulation and experimental results show the effectiveness of the proposed method. In addition, a real-time implementation of our method has been made available as open source software for the benefit of the community.) <|cite_end|> presented 3D sound source localization
algorithms using a disk-shaped
sound detector and a linear microphone array (such as the Kinect and PS3 Eye), respectively.
Misra \textit{et al.} suggested a robust localization method in
noisy environments
using a drone.
This approach requires the accumulation of steady acoustic signals at different positions, and thus cannot be applied to a transient sound event or to stationary sound detectors.
An \textit{et al.} <|cite_start|> (Reference: Reflection-Aware Sound Source Localization: We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals with a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8m tested in a room of 7m by 7m area with 3m height, including a mobile and non-line-of-sight sound source. We also reveal that the modeling of indirect rays increases the localization accuracy by 40% compared to only using direct acoustic rays.) <|cite_end|> presented a reflection-aware sound
source localization algorithm that used direct and reflected acoustic rays
to estimate a 3D source position in indoor environments.
Our approach is based on this work and takes into account diffraction effects to considerably improve the accuracy.
\paragraph{Interactive sound propagation.}
There is considerable work in acoustics and physically-based modeling to develop fast and accurate sound
simulators that can generate realistic sounds for computer-aided design and virtual environments.
Geometric acoustics (GA) techniques have been widely utilized to
simulate sound propagation efficiently using ray tracing. Because ray
tracing algorithms are based on the high-frequency propagation model of sound,
low-frequency wave effects like diffraction are
modeled separately.
In addition, an estimation of the acoustic impulse response between the source and
the listener was performed using Monte Carlo path tracing <|cite_start|> (Reference: High-order Diffraction and Diffuse Reflections for Interactive Sound Propagation in Large Environments: We present novel algorithms for modeling interactive diffuse reflections and higher-order diffraction in large-scale virtual environments. Our formulation is based on ray-based sound propagation and is directly applicable to complex geometric datasets. We use an incremental approach that combines radiosity and path tracing techniques to iteratively compute diffuse reflections. We also present algorithms for wavelength-dependent simplification and visibility graph computation to accelerate higher-order diffraction at runtime. The overall system can generate plausible sound effects at interactive rates in large, dynamic scenes that have multiple sound sources. We highlight the performance in complex indoor and outdoor environments and observe an order of magnitude performance improvement over previous methods.) <|cite_end|> or a hybrid combination of geometric and numeric methods techniques <|cite_start|> (Reference: Wave-ray coupling for interactive sound propagation in large complex scenes: We present a novel hybrid approach that couples geometric and numerical acoustic techniques for interactive sound propagation in complex environments. Our formulation is based on a combination of spatial and frequency decomposition of the sound field. We use numerical wave-based techniques to precompute the pressure field in the near-object regions and geometric propagation techniques in the far-field regions to model sound propagation. We present a novel two-way pressure coupling technique at the interface of near-object and far-field regions. At runtime, the impulse response at the listener position is computed at interactive rates based on the stored pressure field and interpolation techniques. Our system is able to simulate high-fidelity acoustic effects such as diffraction, scattering, low-pass filtering behind obstruction, reverberation, and high-order reflections in large, complex indoor and outdoor environments and Half-Life 2 game engine. The pressure computation requires orders of magnitude lower memory than standard wave-based numerical techniques.) <|cite_end|>.
\begin{figure*}[t]
\centering
\includegraphics[width=2\columnwidth]{figures/2_runtime_part.pdf}
\vspace*{-0.3cm}
\caption{
We show run-time computations using acoustic ray
tracing with diffraction rays for sound source localization.
The diffraction-aware acoustic ray tracing is highlighted in blue and is the main contribution of this paper.
The source position estimation is performed by identifying ray convergence.
}
\vspace{-1em}
\label{fig:blockDiagram_runtime}
\end{figure*}
Exact methods to model diffraction are based on directly solving the acoustic wave equation using numeric methods like boundary or finite element methods <|cite_start|> (Reference: New higher-order boundary element methods for wave diffraction/radiation: ) <|cite_end|> <|cite_start|> (Reference: A hybrid method combining the edge source integral equation and the boundary element method for scattering problems: A hybrid method for acoustic scattering problems is studied in this paper. The boundary element method is combined with a recently developed edge diffraction based method [J. Acoust. Soc. Am. 133, pp. 3681-3691, 2013]. Although the edge diffraction method has been shown to provide accurate results for convex, rigid objects at a very attractive computational cost, it has some numerical challenges for certain radiation directions. The hybrid method suggested here has the same structure as the boundary element method (BEM): a first step where the sound pressure is calculated on the surface of the scattering object, and a second step where the scattered sound is obtained at any external receiver point. In this method, the edge diffraction based method is used for the first step and then the calculation of the scattered sound is calculated a la BEM by means of the Kirchhoff - Helmholtz Integral equation. Several benchmark cases are studied and the results are compared to different methods.) <|cite_end|> or the BTM
model <|cite_start|> (Reference: An analytic secondary source model of edge diffraction impulse responses: A new impulse-response model for the edge diffraction from finite rigid or soft wedges is presented which is based on the exact Biot–Tolstoy solution. The new model is an extension of the work by Medwin et al. [H. Medwin et al., J. Acoust. Soc. Am. 72, 1005–1013 (1982)], in that the concept of secondary edge sources is used. It is shown that analytical directivity functions for such edge sources can be derived and that they give the correct solution for the infinite wedge. These functions support the assumption for the first-order diffraction model suggested by Medwin et al. that the contributions to the impulse response from the two sides around the apex point are exactly identical. The analytical functions also indicate that Medwin’s second-order diffraction model contains approximations which, however, might be of minor importance for most geometries. Access to analytical directivity functions makes it possible to derive explicit expressions for the first- and even second-order diffraction for certain ...) <|cite_end|> and its extension to higher order diffraction
models <|cite_start|> (Reference: An integral equation formulation for the diffraction from convex plates and polyhedra: A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.) <|cite_end|>.
Commonly used techniques to model diffraction with geometric acoustic methods are based on two models:
the Uniform Theory of Diffraction (UTD) <|cite_start|> (Reference: A uniform geometrical theory of diffraction for an edge in a perfectly conducting surface: A compact dyadic diffraction coefficient for electromagnetic waves obliquely incident on a curved edse formed by perfectly conducting curved ot plane surfaces is obtained. This diffraction coefficient remains valid in the transition regions adjacent to shadow and reflection boundaries, where the diffraction coefficients of Keller's original theory fail. Our method is based on Keller's method of the canonical problem, which in this case is the perfectly conducting wedge illuminated by plane, cylindrical, conical, and spherical waves. When the proper ray-fixed coordinate system is introduced, the dyadic diffraction coefficient for the wedge is found to be the sum of only two dyads, and it is shown that this is also true for the dyadic diffraction coefficients of higher order edges. One dyad contains the acoustic soft diffraction coefficient; the other dyad contains the acoustic hard diffraction coefficient. The expressions for the acoustic wedge diffraction coefficients contain Fresenel integrals, which ensure that the total field is continuous at shadow and reflection boundaries. The diffraction coefficients have the same form for the different types of edge illumination; only the arguments of the Fresnel integrals are different. Since diffraction is a local phenomenon, and locally the curved edge structure is wedge shaped, this result is readily extended to the curved wedge. It is interesting that even though the polarizations and the wavefront curvatures of the incident, reflected, and diffracted waves are markedly different, the total field calculated from this high-frequency solution for the curved wedge is continuous at shadow and reflection boundaries.) <|cite_end|> and the
Biot-Tolstoy-Medwin (BTM) model <|cite_start|> (Reference: An analytic secondary source model of edge diffraction impulse responses: A new impulse-response model for the edge diffraction from finite rigid or soft wedges is presented which is based on the exact Biot–Tolstoy solution. The new model is an extension of the work by Medwin et al. [H. Medwin et al., J. Acoust. Soc. Am. 72, 1005–1013 (1982)], in that the concept of secondary edge sources is used. It is shown that analytical directivity functions for such edge sources can be derived and that they give the correct solution for the infinite wedge. These functions support the assumption for the first-order diffraction model suggested by Medwin et al. that the contributions to the impulse response from the two sides around the apex point are exactly identical. The analytical functions also indicate that Medwin’s second-order diffraction model contains approximations which, however, might be of minor importance for most geometries. Access to analytical directivity functions makes it possible to derive explicit expressions for the first- and even second-order diffraction for certain ...) <|cite_end|>. The BTM model is
an accurate diffraction formulation that computes an integral of the
diffracted sound along the finite edges in the time
domain <|cite_start|> (Reference: An integral equation formulation for the diffraction from convex plates and polyhedra: A formulation of the problem of scattering from obstacles with edges is presented. The formulation is based on decomposing the field into geometrical acoustics, first-order, and multiple-order edge diffraction components. An existing secondary-source model for edge diffraction from finite edges is extended to handle multiple diffraction of all orders. It is shown that the multiple-order diffraction component can be found via the solution to an integral equation formulated on pairs of edge points. This gives what can be called an edge source signal. In a subsequent step, this edge source signal is propagated to yield a multiple-order diffracted field, taking all diffraction orders into account. Numerical experiments demonstrate accurate response for frequencies down to 0 for thin plates and a cube. No problems with irregular frequencies, as happen with the Kirchhoff-Helmholtz integral equation, are observed for this formulation. For the axisymmetric scattering from a circular disc, a highly effective symmetric formulation results, and results agree with reference solutions across the entire frequency range.) <|cite_end|> <|cite_start|> (Reference: A hybrid method combining the edge source integral equation and the boundary element method for scattering problems: A hybrid method for acoustic scattering problems is studied in this paper. The boundary element method is combined with a recently developed edge diffraction based method [J. Acoust. Soc. Am. 133, pp. 3681-3691, 2013]. Although the edge diffraction method has been shown to provide accurate results for convex, rigid objects at a very attractive computational cost, it has some numerical challenges for certain radiation directions. The hybrid method suggested here has the same structure as the boundary element method (BEM): a first step where the sound pressure is calculated on the surface of the scattering object, and a second step where the scattered sound is obtained at any external receiver point. In this method, the edge diffraction based method is used for the first step and then the calculation of the scattered sound is calculated a la BEM by means of the Kirchhoff - Helmholtz Integral equation. Several benchmark cases are studied and the results are compared to different methods.) <|cite_end|> <|cite_start|> (Reference: Efficient finite-edge diffraction using conservative from-region visibility: ) <|cite_end|>.
In practice, the BTM model is more accurate, but is limited to non-interactive applications.
The UTD model approximates an infinite wedge as a secondary source of
diffracted sounds, which can be reflected and diffracted again before reaching
the listener. UTD based approaches have been effective for
many real-time sound generation applications, especially in complex
environments with occluding
objects <|cite_start|> (Reference: Modeling Acoustics in Virtual Environments Using the Uniform Theory of Diffraction: Realistic modeling of reverberant sound in 3D virtual worlds provides users with important cues for localizing sound sources and understanding spatial properties of the environment. Unfortunately, current geometric acoustic modeling systems do not accurately simulate reverberant sound. Instead, they model only direct transmission and specular reflection, while diffraction is either ignored or modeled through statistical approximation. However, diffraction is important for correct interpretation of acoustic environments, especially when the direct path between sound source and receiver is occluded. The Uniform Theory of Diffraction (UTD) extends geometrical acoustics with diffraction phenomena: illuminated edges become secondary sources of diffracted rays that in turn may propagate through the environment. In this paper, we propose an efficient way for computing the acoustical effect of diffraction paths using the UTD for deriving secondary diffracted rays and associated diffraction coefficients. Our main contributions are: 1) a beam tracing method for enumerating sequences of diffracting edges efficiently and without aliasing in densely occluded polyhedral environments; 2) a practical approximation to the simulated sound field in which diffraction is considered only in shadow regions; and 3) a real-time auralization system demonstrating that diffraction dramatically improves the quality of spatialized sound in virtual environments.) <|cite_end|> <|cite_start|> (Reference: High-order Diffraction and Diffuse Reflections for Interactive Sound Propagation in Large Environments: We present novel algorithms for modeling interactive diffuse reflections and higher-order diffraction in large-scale virtual environments. Our formulation is based on ray-based sound propagation and is directly applicable to complex geometric datasets. We use an incremental approach that combines radiosity and path tracing techniques to iteratively compute diffuse reflections. We also present algorithms for wavelength-dependent simplification and visibility graph computation to accelerate higher-order diffraction at runtime. The overall system can generate plausible sound effects at interactive rates in large, dynamic scenes that have multiple sound sources. We highlight the performance in complex indoor and outdoor environments and observe an order of magnitude performance improvement over previous methods.) <|cite_end|>. Our approach is motivated by these real-time simulation and proposes a real-time source localization algorithm using UTD.
\label{sec:3}
In this section, we explain the ray tracing based SSL method that our work builds upon and motivate the need to model diffraction effects.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\columnwidth]{figures/5_computing_distance_weight.png}
\caption{
This figure shows an example of computing distance weights for particles $x_t^1$, $x_t^2$, and $x_t^3$ against a ray path, $R_n = [...,r_n^{k-1}, r_n^{k},...]$.
The chosen representative distance of each particle is shown in red, and the selected perpendicular feet are $\Pi_n^1$, $\Pi_n^2$, and $\Pi_n^3$.
}
\vspace{-1em}
\label{fig:computing_distance_weight}
\end{figure}
\paragraph{Reflection-aware SSL.}
Our work is built upon the reflection-aware
SSL <|cite_start|> (Reference: Reflection-Aware Sound Source Localization: We present a novel, reflection-aware method for 3D sound localization in indoor environments. Unlike prior approaches, which are mainly based on continuous sound signals from a stationary source, our formulation is designed to localize the position instantaneously from signals within a single frame. We consider direct sound and indirect sound signals that reach the microphones after reflecting off surfaces such as ceilings or walls. We then generate and trace direct and reflected acoustic paths using inverse acoustic ray tracing and utilize these paths with Monte Carlo localization to estimate a 3D sound source position. We have implemented our method on a robot with a cube-shaped microphone array and tested it against different settings with continuous and intermittent sound signals with a stationary or a mobile source. Across different settings, our approach can localize the sound with an average distance error of 0.8m tested in a room of 7m by 7m area with 3m height, including a mobile and non-line-of-sight sound source. We also reveal that the modeling of indirect rays increases the localization accuracy by 40% compared to only using direct acoustic rays.) <|cite_end|>,
which is based on the ray-tracing technique.
It consists of two parts: performing acoustic ray tracing that accounts for reflections, and estimating a converging region of the generated acoustic rays to compute the sound source position based on
Monte-Carlo localization (MCL). Initially, we collect directions of the
incoming sound signals via a TDOA algorithm <|cite_start|> (Reference: Robust Localization and Tracking of Simultaneous Moving Sound Sources Using Beamforming and Particle Filtering: Mobile robots in real-life settings would benefit from being able to localize and track sound sources. Such a capability can help localizing a person or an interesting event in the environment, and also provides enhanced processing for other capabilities such as speech recognition. To give this capability to a robot, the challenge is not only to localize simultaneous sound sources, but to track them over time. In this paper we propose a robust sound source localization and tracking method using an array of eight microphones. The method is based on a frequency-domain implementation of a steered beamformer along with a particle filter-based tracking algorithm. Results show that a mobile robot can localize and track in real-time multiple moving sources of different types over a range of 7 meters. These new capabilities allow a mobile robot to interact using more natural means with people in real life settings.) <|cite_end|>
and generate acoustic rays in the directions opposite to the collected ones. Although we cannot determine whether a collected direction comes from a reflected acoustic path or from the direct path, reflected rays are recursively generated whenever a ray collides with an obstacle in the environment. Note that the reflected acoustic rays are generated under the assumption of specular reflection.
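As a rough illustration of this step, the following Python sketch generates one inverse acoustic ray from an estimated direction of arrival and reflects it specularly whenever it hits a surface. It is a minimal example rather than the original implementation; the scene-intersection routine \texttt{intersect\_scene} and the maximum reflection order are placeholder assumptions.
\begin{verbatim}
import numpy as np

def reflect(direction, normal):
    # Specular reflection of a unit direction about a unit surface normal.
    return direction - 2.0 * np.dot(direction, normal) * normal

def trace_inverse_ray(mic_pos, doa_dir, intersect_scene, max_order=3):
    # Shoot a ray opposite to the estimated direction of arrival and
    # reflect it specularly at every obstacle hit, collecting the segments.
    origin = np.asarray(mic_pos, dtype=float)
    direction = -np.asarray(doa_dir, dtype=float)   # opposite to the DOA
    direction /= np.linalg.norm(direction)
    path = []
    for _ in range(max_order + 1):
        hit = intersect_scene(origin, direction)    # assumed: (point, normal) or None
        if hit is None:
            path.append((origin, origin + 10.0 * direction))  # finite open segment
            break
        hit_point, normal = hit
        path.append((origin, hit_point))
        direction = reflect(direction, normal)
        origin = hit_point
    return path
\end{verbatim}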
This work assumes that only a single sound source exists within one frame in an indoor environment and that all sound signals are generated by that source. As a result, the region where the acoustic rays converge can be treated as the estimated location of the sound source.
To identify such a converging region, we utilize MCL with randomly generated particles serving as hypothetical locations of the sound source and define a likelihood for each particle.
The likelihood of the $i$-th particle, $x_t^i$, becomes higher as the particle gets closer to any acoustic ray $r_n^k$, which denotes the $n$-th ray with $k$-th order reflection. This is realized by the following distance weight function:
\begin{equation}
\begin{aligned}
w_d(x_t^{i}, r_n^k) &= {f_N}(\|x_t^{i} - \pi_i^k \|\ |\ 0, \sigma_d) \cdot {F}(x_t^{i}, r_n^k),
\end{aligned}
\label{eq:distance_weight}
\end{equation}
where $\pi_i^k$ is the perpendicular foot from $x_t^i$ to $r_n^k$, $\|\cdot\|$ denotes the Euclidean distance, $f_N(\cdot)$ is the Gaussian function with zero mean and standard deviation $\sigma_d$, and $F(\cdot)$ is a filter function that returns zero to exclude irrelevant cases in which the perpendicular foot lies outside the ray segment $r_n^k$, e.g., $\pi_2^2$ in
Fig.~\ref{fig:computing_distance_weight}.
This approach has been demonstrated to localize a moving source even if
the source
generates
intermittent sound signals and there is an obstacle blocking the
line-of-sight from the listener to the source.
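To make the particle weighting concrete, the Python sketch below evaluates Eq.~\eqref{eq:distance_weight} for a single particle against one ray segment. It is only an illustration of the formula; the segment endpoints and the value of $\sigma_d$ are placeholder assumptions, and how the per-segment weights are aggregated into a particle likelihood (e.g., by a maximum or a sum over all segments) is left open here.
\begin{verbatim}
import numpy as np

def distance_weight(particle, seg_start, seg_end, sigma_d=0.3):
    # Gaussian of the particle-to-ray distance, filtered to zero when the
    # perpendicular foot falls outside the ray segment.
    x = np.asarray(particle, dtype=float)
    a = np.asarray(seg_start, dtype=float)
    b = np.asarray(seg_end, dtype=float)
    seg = b - a
    t = np.dot(x - a, seg) / np.dot(seg, seg)  # parameter of the perpendicular foot
    if t < 0.0 or t > 1.0:
        return 0.0                             # filter F(.): foot outside the segment
    foot = a + t * seg                         # perpendicular foot
    d = np.linalg.norm(x - foot)
    return np.exp(-0.5 * (d / sigma_d) ** 2) / (sigma_d * np.sqrt(2.0 * np.pi))
\end{verbatim}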
\paragraph{Motivations.}
While the reflection-aware SSL approach works quite well in indoor scenes whose surfaces are partly specular, we found that its accuracy deteriorates in complex environments containing many objects. Specifically, we observed that in such cases low-frequency effects such as diffraction become a dominant factor.
In this work, we focus on modeling the diffraction effect to robustly identify the sound source location in scenes with many objects.
To achieve our goal, we propose a novel ray-tracing based sound source
localization algorithm using the UTD algorithm and inverse propagation signals
computed by the impulse responses of each acoustic ray. <|paper_end|> | [
"<|reference_start|> Human Identification and Localization by Robots in Collaborative Environments: <|reference_end|>",
"<|reference_start|> A methodology for sound source localization and tracking: Development of 3d microphone array for near-field and far-field applications: Acoustic source localization and tracking using microphone arrays has become a focus of interest in room acoustics, teleconference systems and tracking of sound producing objects. The current methods to estimate the source localization depend on conventional time-delay estimation techniques between microphone pairs, however, ignoring the ambient noise, reflections from surrounding and reverberation in the closed space. In this study, an acoustic source localizer and tracker (ASLT) based on 3D microphone array is designed and developed for real time source detection and localization. Two practical approaches were examined and evaluated, based on direction of arrival (DOA) estimation techniques and steered power response (SPR) of array algorithms in order to improve the accuracy for tracking multiple sources in full 3D coordinates. Among time delay estimation techniques, generalized cross correlation (GCC) is employed in frequency domain using multiple combinations of microphone pairs of the array with Phase transform (PHAT) weighting function for optimum detection of sources in the presence of reverberant environments. PHAT gives a good performance in the presence of ambience, even when the signal to noise ratio (SNR) is low. For SPR, minimum variance distortion less response (MVDR) beamformer weights are evaluated and purposed for accurate source tracking applications. A microphone array is designed using six transducers in spherical configurations and used for the evaluated for proposed methodology. The measurements are carried out in a reverberant chamber under different noise conditions to validate the practicality of the algorithmic chain and finally, the results are obtained and presented to demonstrate the efficiency of the proposed microphone array design and localization technique. <|reference_end|>",
"<|reference_start|> High-order Diffraction and Diffuse Reflections for Interactive Sound Propagation in Large Environments: We present novel algorithms for modeling interactive diffuse reflections and higher-order diffraction in large-scale virtual environments. Our formulation is based on ray-based sound propagation and is directly applicable to complex geometric datasets. We use an incremental approach that combines radiosity and path tracing techniques to iteratively compute diffuse reflections. We also present algorithms for wavelength-dependent simplification and visibility graph computation to accelerate higher-order diffraction at runtime. The overall system can generate plausible sound effects at interactive rates in large, dynamic scenes that have multiple sound sources. We highlight the performance in complex indoor and outdoor environments and observe an order of magnitude performance improvement over previous methods. <|reference_end|>",
"<|reference_start|> New higher-order boundary element methods for wave diffraction/radiation: <|reference_end|>"
] | [
1,
2,
18,
20
] | {"<|cite_1|>": "arxiv-140794", "<|multi_cite_2_1|>": "ss-1642659", "<|multi_cite_2_2|>": "ss-1642660", "<|multi_cite_3_1|>": "ss-1099110", "<|multi_cite_3_2|>": "arxiv-141835", "<|multi_cite_4_1|>": "ss-2279367", "<|multi_cite_4_2|>": "ss-2279368", "<|multi_cite_4_4|>": "arxiv-140794", "<|cite_5|>": "arxiv-140794", "<|cite_6|>": "ss-753299", "<|cite_7|>": "arxiv-140794", "<|cite_8|>": "ss-1099110", "<|cite_9|>": "arxiv-141835", "<|multi_cite_10_1|>": "ss-2279367", "<|multi_cite_10_2|>": "ss-2279368", "<|cite_11|>": "ss-2279367", "<|cite_12|>": "ss-2279368", "<|cite_14|>": "arxiv-140794", "<|cite_15|>": "ss-1888740", "<|cite_16|>": "ss-1542068", "<|multi_cite_17_1|>": "ss-1642661", "<|multi_cite_17_2|>": "ss-1642662", "<|cite_18|>": "ss-1962292", "<|cite_19|>": "ss-1642663", "<|cite_20|>": "ss-753300", "<|cite_21|>": "ss-1962292", "<|multi_cite_22_1|>": "ss-1642663", "<|multi_cite_22_2|>": "ss-1642662", "<|multi_cite_22_3|>": "ss-1642664", "<|multi_cite_23_1|>": "ss-1888739", "<|multi_cite_23_2|>": "ss-1888740", "<|cite_24|>": "ss-753300", "<|multi_cite_25_1|>": "ss-1888739", "<|multi_cite_25_2|>": "ss-1888740", "<|cite_26|>": "arxiv-140794", "<|cite_27|>": "arxiv-92942"} |
1801.02279-0 | <|paper_start|> Title: Identity-preserving Face Recovery from Portraits
Abstract: Identity-preserving Face Recovery from Portraits: Recovering the latent photorealistic faces from their artistic portraits aids human perception and facial analysis. However, a recovery process that can preserve identity is challenging because the fine details of real faces can be distorted or lost in stylized images. In this paper, we present a new Identity-preserving Face Recovery from Portraits (IFRP) method to recover latent photorealistic faces from unaligned stylized portraits. Our IFRP method consists of two components: a Style Removal Network (SRN) and a Discriminative Network (DN). The SRN is designed to transfer feature maps of stylized images to the feature maps of the corresponding photorealistic faces. By embedding spatial transformer networks into the SRN, our method can compensate for misalignments of stylized faces automatically and output aligned realistic face images. The role of the DN is to enforce recovered faces to be similar to authentic faces. To ensure identity preservation, we promote the recovered and ground-truth faces to share similar visual features via a distance measure that compares features of recovered and ground-truth faces extracted from a pre-trained VGG network. We evaluate our method on a large-scale synthesized dataset of real and stylized face pairs and attain state-of-the-art results. In addition, our method can recover photorealistic faces from previously unseen stylized portraits, original paintings and human-drawn sketches.
Introduction
\label{sec:introduction}
A variety of style transfer methods have been proposed to generate portraits in different artistic styles from photorealistic images. However, the recovery of photorealistic faces from artistic portraits has not been fully investigated yet.
In general, stylized face images contain various facial expressions, facial component distortions and misalignments. Therefore, landmark detectors often fail to localize facial landmarks accurately as shown in Figures \ref{fig:openc} and \ref{fig:openg}. Thus, restoring identity-consistent photorealistic face images from unaligned stylized ones is challenging.
\begin{figure}[t]
\vspace{-0.3cm}
\begin{minipage}{0.18\linewidth}
\centering
\subfigure[Original]{\label{fig:opena}\scalebox{1}[1]{\includegraphics[width=1\linewidth]{figs/070284.jpg}}}
\end{minipage}
\hspace{-0.5em}
\begin{minipage}{0.82\linewidth}
\centering
\subfigure[Seen]{\label{fig:openb}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Can_070284.jpg}}}
\subfigure[\scriptsize{Landmarks}]{\label{fig:openc}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Can_070284_lm.jpg}}}
\subfigure[ <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|>]{\label{fig:opend}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Can_070284_Joh.jpg}}}
\subfigure[Ours]{\label{fig:opene}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Can_070284_our.jpg}}}\\
\vspace{-0.8em}
\hspace{0.01em}
\subfigure[Unseen]{\label{fig:openf}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Udn_070284.jpg}}}
\subfigure[\scriptsize{Landmarks}]{\label{fig:openg}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Udn_070284_lm.jpg}}}
\subfigure[ <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|>]{\label{fig:openh}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Udn_070284_Joh.jpg}}}
\subfigure[Ours]{\label{fig:openi}\scalebox{1}[1]{\includegraphics[width=0.234\linewidth]{figs/Udn_070284_our.jpg}}}
\vspace{-0.4em}
\end{minipage}
\vspace{0.05cm}
\caption{Comparisons to the state-of-the-art method. (a) Ground-truth face image (from test dataset; not available in the training dataset). (b, f) Unaligned stylized portraits of (a) from \emph{Candy} style (seen/used style in training) and \emph{Udnie} style (unseen style in training), respectively. (c, g) Detected landmarks by <|cite_start|> (Reference: Facial Landmark Detection by Deep Multi-task Learning: ) <|cite_end|>. (d, h) Results obtained by <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|>. (e, i) Our results.}
\label{fig:open}
\vspace{-0.4cm}
\end{figure}
While recovering photorealistic images from portraits is still uncommon in the literature, image stylization methods have been widely studied. Recently,
Gatys~\emph{et al.} <|cite_start|> (Reference: Controlling Perceptual Factors in Neural Style Transfer: Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer.) <|cite_end|>achieve promising results by transferring different styles of artworks to images via the semantic contents space. Since this method generates the stylized images by iteratively updating the feature maps of CNNs, it requires costly computations. In order to reduce the computational complexity, several feed-forward CNN based methods have been proposed <|cite_start|> (Reference: Texture Networks: Feed-forward Synthesis of Textures and Stylized Images: Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys~et~al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.) <|cite_end|> <|cite_start|> (Reference: Instance Normalization: The Missing Ingredient for Fast Stylization: It this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will is made available on github at https://github.com/DmitryUlyanov/texture_nets. Full paper can be found at arXiv:1701.02096.) <|cite_end|> <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. 
We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|> <|cite_start|> (Reference: A Learned Representation For Artistic Style: The diversity of painting styles represents a rich visual vocabulary for the construction of an image. The degree to which one may learn and parsimoniously capture this visual vocabulary measures our understanding of the higher level features of paintings, if not images in general. In this work we investigate the construction of a single, scalable deep network that can parsimoniously capture the artistic style of a diversity of paintings. We demonstrate that such a network generalizes across a diversity of artistic styles by reducing a painting to a point in an embedding space. Importantly, this model permits a user to explore new painting styles by arbitrarily combining the styles learned from individual paintings. We hope that this work provides a useful step towards building rich models of paintings and offers a window on to the structure of the learned representation of artistic style.) <|cite_end|> <|cite_start|> (Reference: Diversified Texture Synthesis with Feed-forward Networks: Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization.) <|cite_end|> <|cite_start|> (Reference: Fast Patch-based Style Transfer of Arbitrary Style: Artistic style transfer is an image synthesis problem where the content of an image is reproduced with the style of another. Recent works show that a visually appealing style transfer can be achieved by using the hidden activations of a pretrained convolutional neural network. However, existing methods either apply (i) an optimization procedure that works for any style image but is very expensive, or (ii) an efficient feedforward network that only allows a limited number of trained styles. In this work we propose a simpler optimization objective based on local matching that combines the content structure and style textures in a single layer of the pretrained network. 
We show that our objective has desirable properties such as a simpler optimization landscape, intuitive parameter tuning, and consistent frame-by-frame performance on video. Furthermore, we use 80,000 natural images and 80,000 paintings to train an inverse network that approximates the result of the optimization. This results in a procedure for artistic style transfer that is efficient but also allows arbitrary content and style images.) <|cite_end|> <|cite_start|> (Reference: Multi-style Generative Network for Real-time Transfer: Despite the rapid progress in style transfer, existing approaches using feed-forward generative network for multi-style or arbitrary-style transfer are usually compromised of image quality and model flexibility. We find it is fundamentally difficult to achieve comprehensive style modeling using 1-dimensional style embedding. Motivated by this, we introduce CoMatch Layer that learns to match the second order feature statistics with the target styles. With the CoMatch Layer, we build a Multi-style Generative Network (MSG-Net), which achieves real-time performance. We also employ an specific strategy of upsampled convolution which avoids checkerboard artifacts caused by fractionally-strided convolution. Our method has achieved superior image quality comparing to state-of-the-art approaches. The proposed MSG-Net as a general approach for real-time style transfer is compatible with most existing techniques including content-style interpolation, color-preserving, spatial control and brush stroke size control. MSG-Net is the first to achieve real-time brush-size control in a purely feed-forward manner for style transfer. Our implementations and pre-trained models for Torch, PyTorch and MXNet frameworks will be publicly available.) <|cite_end|> <|cite_start|> (Reference: Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization: Gatys et al. recently introduced a neural algorithm that renders a content image in the style of another image, achieving so-called style transfer. However, their framework requires a slow iterative optimization process, which limits its practical application. Fast approximations with feed-forward neural networks have been proposed to speed up neural style transfer. Unfortunately, the speed improvement comes at a cost: the network is usually tied to a fixed set of styles and cannot adapt to arbitrary new styles. In this paper, we present a simple yet effective approach that for the first time enables arbitrary style transfer in real-time. At the heart of our method is a novel adaptive instance normalization (AdaIN) layer that aligns the mean and variance of the content features with those of the style features. Our method achieves speed comparable to the fastest existing approach, without the restriction to a pre-defined set of styles. In addition, our approach allows flexible user controls such as content-style trade-off, style interpolation, color & spatial controls, all using a single feed-forward neural network.) <|cite_end|>.
However, these methods can use only a single style fixed during the training phase. Such methods are insufficient for generating photorealistic face images, as shown in Figures~\ref{fig:opend} and \ref{fig:openh}, because they only capture the correlations of feature maps by the use of Gram matrices and discard spatial relations <|cite_start|> (Reference: {Higher-Order Occurrence Pooling for Bags-Of-Words: Visual Concept Detection: In object recognition, the Bag-of-Words model assumes: i) extraction of local descriptors from images, ii) embedding the descriptors by a coder to a given visual vocabulary space which results in mid-level features, iii) extracting statistics from mid-level features with a pooling operator that aggregates occurrences of visual words in images into signatures, which we refer to as First-order Occurrence Pooling. This paper investigates higher-order pooling that aggregates over co-occurrences of visual words. We derive Bag-of-Words with Higher-order Occurrence Pooling based on linearisation of Minor Polynomial Kernel, and extend this model to work with various pooling operators. This approach is then effectively used for fusion of various descriptor types. Moreover, we introduce Higher-order Occurrence Pooling performed directly on local image descriptors as well as a novel pooling operator that reduces the correlation in the image signatures. Finally, First-, Second-, and Third-order Occurrence Pooling are evaluated given various coders and pooling operators on several widely used benchmarks. The proposed methods are compared to other approaches such as Fisher Vector Encoding and demonstrate improved results.) <|cite_end|> <|cite_start|> (Reference: Domain Adaptation by Mixture of Alignments of Second- or Higher-Order Scatter Tensors: In this paper, we propose an approach to the domain adaptation, dubbed Second- or Higher-order Transfer of Knowledge (So-HoT), based on the mixture of alignments of second- or higher-order scatter statistics between the source and target domains. The human ability to learn from few labeled samples is a recurring motivation in the literature for domain adaptation. Towards this end, we investigate the supervised target scenario for which few labeled target training samples per category exist. Specifically, we utilize two CNN streams: the source and target networks fused at the classifier level. Features from the fully connected layers fc7 of each network are used to compute second- or even higher-order scatter tensors; one per network stream per class. As the source and target distributions are somewhat different despite being related, we align the scatters of the two network streams of the same class (within-class scatters) to a desired degree with our bespoke loss while maintaining good separation of the between-class scatters. We train the entire network in end-to-end fashion. We provide evaluations on the standard Office benchmark (visual domains), RGB-D combined with Caltech256 (depth-to-rgb transfer) and Pascal VOC2007 combined with the TU Berlin dataset (image-to-sketch transfer). We attain state-of-the-art results.) <|cite_end|> <|cite_start|> (Reference: Sparse Coding for Third-order Super-symmetric Tensor Descriptors with Application to Texture Recognition: Super-symmetric tensors - a higher-order extension of scatter matrices - are becoming increasingly popular in machine learning and computer vision for modeling data statistics, co-occurrences, or even as visual descriptors. 
They were shown recently to outperform second-order approaches, however, the size of these tensors are exponential in the data dimensionality, which is a significant concern. In this paper, we study third-order supersymmetric tensor descriptors in the context of dictionary learning and sparse coding. For this purpose, we propose a novel non-linear third-order texture descriptor. Our goal is to approximate these tensors as sparse conic combinations of atoms from a learned dictionary. Apart from the significant benefits to tensor compression that this framework offers, our experiments demonstrate that the sparse coefficients produced by this scheme lead to better aggregation of high-dimensional data and showcase superior performance on two common computer vision tasks compared to the state of the art.) <|cite_end|>.
In order to capture spatially localized statistics of a style image, several patch-based methods <|cite_start|> (Reference: Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks: This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.) <|cite_end|> <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) <|cite_end|>have been developed. However, such methods cannot capture the global structure of faces either, thus failing to generate authentic face images. For instance, patch-based methods <|cite_start|> (Reference: Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks: This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. 
With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.) <|cite_end|> <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) <|cite_end|>fail to attain consistency of face colors, as shown in Figure~\ref{fig:cmp2e}. Furthermore, the state-of-the-art style transfer methods <|cite_start|> (Reference: Controlling Perceptual Factors in Neural Style Transfer: Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer.) <|cite_end|> <|cite_start|> (Reference: Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks: This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. 
With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.) <|cite_end|> <|cite_start|> (Reference: Texture Networks: Feed-forward Synthesis of Textures and Stylized Images: Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys~et~al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.) <|cite_end|> <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|>transfer the desired styles to the given images without considering the task of identity preservation. Hence, previous methods cannot generate real faces while preserving identity.
In this paper, we develop a novel end-to-end trainable identity-preserving approach to face recovery that automatically maps the unaligned stylized portraits to aligned photorealistic face images.
Our network employs two subnetworks: a generative subnetwork, dubbed Style Removal Network (SRN), and a Discriminative Network (DN).
The SRN consists of an autoencoder (a downsampling encoder and an upsampling decoder) and Spatial Transformer Networks (STN) <|cite_start|> (Reference: Spatial Transformer Networks: Convolutional Neural Networks define an exceptionally powerful class of models, but are still limited by the lack of ability to be spatially invariant to the input data in a computationally and parameter efficient manner. In this work we introduce a new learnable module, the Spatial Transformer, which explicitly allows the spatial manipulation of data within the network. This differentiable module can be inserted into existing convolutional architectures, giving neural networks the ability to actively spatially transform feature maps, conditional on the feature map itself, without any extra training supervision or modification to the optimisation process. We show that the use of spatial transformers results in models which learn invariance to translation, scale, rotation and more generic warping, resulting in state-of-the-art performance on several benchmarks, and for a number of classes of transformations.) <|cite_end|>.
The encoder extracts facial components from unaligned stylized face images and transfers the extracted feature maps to the domain of photorealistic images. Subsequently,
our decoder generates face images from the transferred feature maps. STN layers are used by the encoder and the decoder to align stylized faces.
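For illustration, a minimal affine STN layer could be written as the following PyTorch sketch; the localization-network architecture and its layer sizes are placeholder choices for exposition and do not correspond to the exact layers embedded in our SRN.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineSTN(nn.Module):
    # Predicts a 2x3 affine matrix from the feature map and resamples the map.
    def __init__(self, channels):
        super().__init__()
        self.loc = nn.Sequential(
            nn.Conv2d(channels, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 6),
        )
        # Initialize the regression layer to the identity transform.
        self.loc[-1].weight.data.zero_()
        self.loc[-1].bias.data.copy_(
            torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

    def forward(self, x):
        theta = self.loc(x).view(-1, 2, 3)
        grid = F.affine_grid(theta, x.size(), align_corners=False)
        return F.grid_sample(x, grid, align_corners=False)
\end{verbatim}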
The discriminative network, inspired by <|cite_start|> (Reference: Generative Adversarial Networks: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.) <|cite_end|> <|cite_start|> (Reference: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks: In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al.). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.) <|cite_end|> <|cite_start|> (Reference: Ultra-Resolving Face Images by Discriminative Generative Networks: ) <|cite_end|> <|cite_start|> (Reference: {Face hallucination with tiny unaligned images by transformative discriminative neural networks: Conventional face hallucination methods rely heavily on accurate alignment of low-resolution (LR) faces before upsampling them. Misalignment often leads to deficient results and unnatural artifacts for large upscaling factors. However, due to the diverse range of poses and different facial expressions, aligning an LR input image, in particular when it is tiny, is severely difficult. To overcome this challenge, here we present an end-to-end transformative discriminative neural network (TDN) devised for super-resolving unaligned and very small face images with an extreme upscaling factor of 8. Our method employs an upsampling network where we embed spatial transformation layers to allow local receptive fields to line-up with similar spatial supports. Furthermore, we incorporate a class-specific loss in our objective through a successive discriminative network to improve the alignment and upsampling performance with semantic information. Extensive experiments on large face datasets show that the proposed method significantly outperforms the state-of-the-art.) <|cite_end|>, forces SRN to generate destylized faces to be similar to authentic ground-truth faces.
Moreover, as we aim to preserve the facial identity information, we constrain the recovered faces to have the same CNN feature representations as the ground-truth real faces. For this purpose, we employ pixel-level Euclidean and identity-preserving loss functions to guarantee the appearance- and identity-wise similarity to the ground-truth data. We also use an adversarial loss to achieve high-quality visual results.
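As a rough sketch of how such a composite objective can be assembled, the PyTorch snippet below combines a pixel-wise Euclidean term, an identity-preserving term on pre-trained VGG features, and an adversarial term. The choice of VGG layer, the loss weights, and the binary cross-entropy form of the adversarial term are illustrative assumptions rather than our exact formulation.
\begin{verbatim}
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

# Fixed feature extractor: an ImageNet-pretrained VGG truncated at an
# intermediate layer (the cut-off index is an illustrative choice).
vgg_features = vgg19(pretrained=True).features[:21].eval()
for p in vgg_features.parameters():
    p.requires_grad = False

def generator_loss(recovered, ground_truth, disc_scores,
                   w_pix=1.0, w_id=0.01, w_adv=0.001):
    # Pixel-wise Euclidean term.
    loss_pix = F.mse_loss(recovered, ground_truth)
    # Identity-preserving term: distance between VGG feature maps.
    loss_id = F.mse_loss(vgg_features(recovered), vgg_features(ground_truth))
    # Adversarial term: disc_scores are the discriminator's probabilities
    # that the recovered faces are real; push them towards one.
    loss_adv = F.binary_cross_entropy(disc_scores, torch.ones_like(disc_scores))
    return w_pix * loss_pix + w_id * loss_id + w_adv * loss_adv
\end{verbatim}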
To train our network, we require pairs of Stylized Face (SF) and ground-truth Real Face (RF) images. Therefore, we synthesize a large-scale dataset of SF/RF pairs. We observe that our CNN filters learned on images of seen styles (used for training) can extract meaningful features from images in unseen styles. Thus, the facial information of unseen stylized portraits can be extracted and used to generate photorealistic faces, as shown in the experimental section.
The main contributions of our work are fourfold:
\setlist[enumerate,1]{label={(\roman*)}}
\vspace{0.2cm}
\begin{enumerate}[topsep=0pt,itemsep=0.1pt,leftmargin=16pt]
\item We propose an IFRP approach that can recover photorealistic faces from unaligned stylized portraits. Our method generates facial identities and expressions that match the ground-truth face images well.
\item We use STNs as intermediate layers to compensate for misalignments of input portraits. Thus, our method does not require the use of facial landmarks or 3D face models (typically used for face alignment).
\item We fuse an identity-preserving loss, a pixel-wise similarity loss and an adversarial loss to remove seen/unseen styles from portraits and recover the underlying identity.
\item As large-scale datasets of stylized and photorealistic face pairs are not available, we synthesize a large dataset of pairs of stylized and photorealistic faces, which we will make available online.
\end{enumerate}\vspace{0.2cm}
To the best of our knowledge, our method is the first attempt to provide a unified approach to the automated style removal of unaligned stylized portraits.
Related Work
\label{sec:Related Work}
In this section, we briefly review neural generative models and deep style transfer methods for image generation.
\subsection{Neural Generative Models}
There exist many generative models for the problem of image generation <|cite_start|> (Reference: Pixel Recurrent Neural Networks: Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.) <|cite_end|> <|cite_start|> (Reference: Auto-Encoding Variational Bayes: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contributions are two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.) <|cite_end|> <|cite_start|> (Reference: Pixel Recurrent Neural Networks: Modeling the distribution of natural images is a landmark problem in unsupervised learning. This task requires an image model that is at once expressive, tractable and scalable. We present a deep neural network that sequentially predicts the pixels in an image along the two spatial dimensions. Our method models the discrete probability of the raw pixel values and encodes the complete set of dependencies in the image. Architectural novelties include fast two-dimensional recurrent layers and an effective use of residual connections in deep recurrent networks. We achieve log-likelihood scores on natural images that are considerably better than the previous state of the art. Our main results also provide benchmarks on the diverse ImageNet dataset. Samples generated from the model appear crisp, varied and globally coherent.) <|cite_end|> <|cite_start|> (Reference: Generative Adversarial Networks: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. 
In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to 1/2 everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.) <|cite_end|> <|cite_start|> (Reference: Deep Generative Image Models using a Laplacian Pyramid of Adversarial Networks: In this paper we introduce a generative parametric model capable of producing high quality samples of natural images. Our approach uses a cascade of convolutional networks within a Laplacian pyramid framework to generate images in a coarse-to-fine fashion. At each level of the pyramid, a separate generative convnet model is trained using the Generative Adversarial Nets (GAN) approach (Goodfellow et al.). Samples drawn from our model are of significantly higher quality than alternate approaches. In a quantitative assessment by human evaluators, our CIFAR10 samples were mistaken for real images around 40% of the time, compared to 10% for samples drawn from a GAN baseline model. We also show samples from models trained on the higher resolution images of the LSUN scene dataset.) <|cite_end|> <|cite_start|> (Reference: Image De-raining Using a Conditional Generative Adversarial Network: Severe weather conditions such as rain and snow adversely affect the visual quality of images captured under such conditions thus rendering them useless for further usage and sharing. In addition, such degraded images drastically affect performance of vision systems. Hence, it is important to solve the problem of single image de-raining/de-snowing. However, this is a difficult problem to solve due to its inherent ill-posed nature. Existing approaches attempt to introduce prior information to convert it into a well-posed problem. In this paper, we investigate a new point of view in addressing the single image de-raining problem. Instead of focusing only on deciding what is a good prior or a good framework to achieve good quantitative and qualitative performance, we also ensure that the de-rained image itself does not degrade the performance of a given computer vision algorithm such as detection and classification. In other words, the de-rained result should be indistinguishable from its corresponding clear image to a given discriminator. This criterion can be directly incorporated into the optimization framework by using the recently introduced conditional generative adversarial networks (GANs). To minimize artifacts introduced by GANs and ensure better visual quality, a new refined loss function is introduced. Based on this, we propose a novel single image de-raining method called Image De-raining Conditional General Adversarial Network (ID-CGAN), which considers quantitative, visual and also discriminative performance into the objective function. Experiments evaluated on synthetic images and real images show that the proposed method outperforms many recent state-of-the-art single image de-raining methods in terms of quantitative and visual performance.) <|cite_end|> <|cite_start|> (Reference: Face Destylization: Numerous style transfer methods which produce artistic styles of portraits have been proposed to date. 
However, the inverse problem of converting the stylized portraits back into realistic faces is yet to be investigated thoroughly. Reverting an artistic portrait to its original photo-realistic face image has potential to facilitate human perception and identity analysis. In this paper, we propose a novel Face Destylization Neural Network (FDNN) to restore the latent photo-realistic faces from the stylized ones. We develop a Style Removal Network composed of convolutional, fully-connected and deconvolutional layers. The convolutional layers are designed to extract facial components from stylized face images. Consecutively, the fully-connected layer transfers the extracted feature maps of stylized images into the corresponding feature maps of real faces and the deconvolutional layers generate real faces from the transferred feature maps. To enforce the destylized faces to be similar to authentic face images, we employ a discriminative network, which consists of convolutional and fully connected layers. We demonstrate the effectiveness of our network by conducting experiments on an extensive set of synthetic images. Furthermore, we illustrate our network can recover faces from stylized portraits and real paintings for which the stylized data was unavailable during the training phase.) <|cite_end|>. Among them, GANs are conceptually closely related to our problem as they employ an adversarial loss that forces the generated images to be as photorealistic as the ground-truth images.
Several methods adopt an adversarial training to learn a parametric translating function from a large-scale dataset of input-output pairs, such as super-resolution <|cite_start|> (Reference: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.) <|cite_end|> <|cite_start|> (Reference: {Face hallucination with tiny unaligned images by transformative discriminative neural networks: Conventional face hallucination methods rely heavily on accurate alignment of low-resolution (LR) faces before upsampling them. Misalignment often leads to deficient results and unnatural artifacts for large upscaling factors. However, due to the diverse range of poses and different facial expressions, aligning an LR input image, in particular when it is tiny, is severely difficult. To overcome this challenge, here we present an end-to-end transformative discriminative neural network (TDN) devised for super-resolving unaligned and very small face images with an extreme upscaling factor of 8. Our method employs an upsampling network where we embed spatial transformation layers to allow local receptive fields to line-up with similar spatial supports. Furthermore, we incorporate a class-specific loss in our objective through a successive discriminative network to improve the alignment and upsampling performance with semantic information. Extensive experiments on large face datasets show that the proposed method significantly outperforms the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Beyond Face Rotation: Global and Local Perception GAN for Photorealistic and Identity Preserving Frontal View Synthesis: Photorealistic frontal view synthesis from a single face image has a wide range of applications in the field of face recognition. 
Although data-driven deep learning methods have been proposed to address this problem by seeking solutions from ample face data, this problem is still challenging because it is intrinsically ill-posed. This paper proposes a Two-Pathway Generative Adversarial Network (TP-GAN) for photorealistic frontal view synthesis by simultaneously perceiving global structures and local details. Four landmark located patch networks are proposed to attend to local textures in addition to the commonly used global encoder-decoder network. Except for the novel architecture, we make this ill-posed problem well constrained by introducing a combination of adversarial loss, symmetry loss and identity preserving loss. The combined loss function leverages both frontal face distribution and pre-trained discriminative deep face models to guide an identity preserving inference of frontal views from profiles. Different from previous deep learning methods that mainly rely on intermediate features for recognition, our method directly leverages the synthesized identity preserving image for downstream tasks like face recognition and attribution estimation. Experimental results demonstrate that our method not only presents compelling perceptual results but also outperforms state-of-the-art results on large pose face recognition.) <|cite_end|> <|cite_start|> (Reference: Hallucinating very low-resolution unaligned and noisy face
images by transformative discriminative autoencoders: Most of the conventional face hallucination methods assume the input image is sufficiently large and aligned, and all require the input image to be noise-free. Their performance degrades drastically if the input image is tiny, unaligned, and contaminated by noise. In this paper, we introduce a novel transformative discriminative autoencoder to 8X super-resolve unaligned noisy and tiny (16X16) low-resolution face images. In contrast to encoder-decoder based autoencoders, our method uses decoder-encoder-decoder networks. We first employ a transformative discriminative decoder network to upsample and denoise simultaneously. Then we use a transformative encoder network to project the intermediate HR faces to aligned and noise-free LR faces. Finally, we use the second decoder to generate hallucinated HR images. Our extensive evaluations on a very large face dataset show that our method achieves superior hallucination results and outperforms the state-of-the-art by a large margin of 1.82dB PSNR.) <|cite_end|> <|cite_start|> (Reference: Ultra-Resolving Face Images by Discriminative Generative Networks: ) <|cite_end|>and inpainting <|cite_start|> (Reference: Context Encoders: Feature Learning by Inpainting: We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders -- a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.) <|cite_end|>. These approaches often use the $\ell_2$ or $\ell_1$ norm and adversarial losses to compare the generated image to the corresponding ground truth image. Although these methods produce impressive photorealistic images, they fail to preserve identities of subjects.
Conditional GANs have been used for the task of generating photographs from sketches <|cite_start|> (Reference: Scribbler: Controlling Deep Image Synthesis with Sketch and Color: Recently, there have been several promising methods to generate realistic imagery from deep convolutional networks. These methods sidestep the traditional computer graphics rendering pipeline and instead generate imagery at the pixel level by learning from large collections of photos (e.g. faces or bedrooms). However, these methods are of limited utility because it is difficult for a user to control what the network produces. In this paper, we propose a deep adversarial image synthesis architecture that is conditioned on sketched boundaries and sparse color strokes to generate realistic cars, bedrooms, or faces. We demonstrate a sketch based image synthesis system which allows users to 'scribble' over the sketch to indicate preferred color for objects. Our network can then generate convincing images that satisfy both the color and the sketch constraints of user. The network is feed-forward which allows users to see the effect of their edits in real time. We compare to recent work on sketch to image synthesis and show that our approach can generate more realistic, more diverse, and more controllable outputs. The architecture is also effective at user-guided colorization of grayscale images.) <|cite_end|>, and from semantic layout and scene attributes <|cite_start|> (Reference: Learning to Generate Images of Outdoor Scenes from Attributes and Semantic Layouts: Automatic image synthesis research has been rapidly growing with deep networks getting more and more expressive. In the last couple of years, we have observed images of digits, indoor scenes, birds, chairs, etc. being automatically generated. The expressive power of image generators have also been enhanced by introducing several forms of conditioning variables such as object names, sentences, bounding box and key-point locations. In this work, we propose a novel deep conditional generative adversarial network architecture that takes its strength from the semantic layout and scene attributes integrated as conditioning variables. We show that our architecture is able to generate realistic outdoor scene images under different conditions, e.g. day-night, sunny-foggy, with clear object boundaries.) <|cite_end|>. Li and Wand <|cite_start|> (Reference: Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks: This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. 
As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.) <|cite_end|>train a Markovian GAN for the style transfer -- a discriminative training is applied on Markovian neural patches to capture local style statistics.
Isola \emph{et al.} <|cite_start|> (Reference: Image-to-Image Translation with Conditional Adversarial Networks: We investigate conditional adversarial networks as a general-purpose solution to image-to-image translation problems. These networks not only learn the mapping from input image to output image, but also learn a loss function to train this mapping. This makes it possible to apply the same generic approach to problems that traditionally would require very different loss formulations. We demonstrate that this approach is effective at synthesizing photos from label maps, reconstructing objects from edge maps, and colorizing images, among other tasks. Indeed, since the release of the pix2pix software associated with this paper, a large number of internet users (many of them artists) have posted their own experiments with our system, further demonstrating its wide applicability and ease of adoption without the need for parameter tweaking. As a community, we no longer hand-engineer our mapping functions, and this work suggests we can achieve reasonable results without hand-engineering our loss functions either.) <|cite_end|>develop ``pix2pix'' framework which uses so-called ``Unet'' architecture and the patch-GAN to transfer low-level features from the input to the output domain. For faces, this approach produces visual artefacts and fails to capture the global structure of faces.
Patch-based methods in general miss this global facial structure and therefore produce poor destylization results.
In contrast, we propose an identity-preserving loss to faithfully recover the most prominent details of faces.
Moreover, there exist several methods to synthesize sketches from photographs (and vice versa) <|cite_start|> (Reference: A study on recognizing non-artistic face sketches: Face sketches are being used in eyewitness testimonies for about a century. These sketches are crucial in finding suspects when no photo is available, but a mental image in the eyewitness's mind. However, research shows that current procedures used for eyewitness testimonies have two main problems. First, they can significantly disturb the memories of the eyewitness. Second, in many cases, these procedures result in face images far from their target faces. These two problems are related to the plasticity of the human visual system and the differences between face perception in humans (holistic) and current methods of sketch production (piecemeal). In this paper, we present some insights for more realistic sketch to photo matching. We describe how to retrieve identity specific information from crude sketches, directly drawn by the non-artistic eyewitnesses. The sketches we used merely contain facial component outlines and facial marks (e.g. wrinkles and moles). We compare results of automatically matching two types sketches (trace-over and user-provided, 25 each) to four types of faces (original, locally exaggerated, configurally exaggerated, and globally exaggerated, 249 each), using two methods (PDM distance comparison and PCA classification). Based on our results, we argue that for automatic non-artistic sketch to photo matching, the algorithms should compare the user-provided sketches with globally exaggerated faces, with a soft constraint on facial marks, to achieve the best matching rates. This is because the user-provided sketch from the user's mental image, seems to be caricatured both locally and configurally.) <|cite_end|> <|cite_start|> (Reference: Human face image searching system using sketches: This paper reports a human face image searching system using sketches. A two-phase method, namely, sketch-to-mug-shot matching and human face image searching using relevance feedback, is designed and developed. In the sketch-to-mug-shot matching phase, we have developed a facial feature matching algorithm using local and global features. A point distribution model is employed to represent local facial features while the global feature consists of a set of the geometrical relationship between facial features. It is found that the performance of the sketch-to-mug-shot matching is good if the sketch image looks like the mug shot image in the database. However, in some situations, it is hard to construct a sketch that looks like the photograph. To overcome this limitation, this paper makes use of the concept of ldquohuman-in-the-looprdquo and proposes a human face image searching algorithm using relevance feedback in the second phase. Positive and negative samples will be collected from the user. A feedback algorithm that employs subspace linear discriminant analysis for online learning of the optimal projection for face representation is then designed and developed. The proposed system has been evaluated using the FERET database and a Japanese database with hundreds of individuals. The results are encouraging.) <|cite_end|> <|cite_start|> (Reference: Face Sketch Synthesis and Recognition: We propose a novel face photo retrieval system using sketch drawings. By transforming a photo image into a sketch, we reduce the difference between photo and sketch significantly, thus allow effective matching between the two. 
To improve the synthesis performance, we separate shape and texture information in a face photo, and conduct transformation on them respectively. Finally a Bayesian classifier is used to recognize the probing sketch from the synthesized pseudo-sketches. Experiments on a data set containing 606 people clearly demonstrate the efficacy of the algorithm.) <|cite_end|> <|cite_start|> (Reference: Bypassing synthesis: Pls for face recognition with pose, low-resolution and sketch: This paper presents a novel way to perform multi-modal face recognition. We use Partial Least Squares (PLS) to linearly map images in different modalities to a common linear subspace in which they are highly correlated. PLS has been previously used effectively for feature selection in face recognition. We show both theoretically and experimentally that PLS can be used effectively across modalities. We also formulate a generic intermediate subspace comparison framework for multi-modal recognition. Surprisingly, we achieve high performance using only pixel intensities as features. We experimentally demonstrate the highest published recognition rates on the pose variations in the PIE data set, and also show that PLS can be used to compare sketches to photos, and to compare images taken at different resolutions.) <|cite_end|>. While sketch-to-face synthesis is a related problem, our unified framework can work with various more complex styles.
\subsection{Deep Style Transfer}
Style transfer is a technique that re-renders a given content image (the input) in a specified painting style while preserving the content of the input.
We distinguish \emph{image optimization-based} and \emph{feed-forward} style transfer methods. The seminal optimization-based work <|cite_start|> (Reference: Image Style Transfer using Convolutional Neural Networks: Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.) <|cite_end|>transfers the style of an artistic image to a given photograph. It uses an iterative optimization to generate a target image which is randomly initialized (Gaussian distribution). During the optimization step, the statistics of the neural activations of the target, the content and style images are matched.
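Schematically, and up to normalisation constants, the optimised objective combines a content term and a Gram-matrix style term,
\[ \mathcal{L}(x) \;=\; \alpha \,\big\| F_{l_c}(x) - F_{l_c}(c) \big\|_2^2 \;+\; \beta \sum_{l} w_l \,\big\| G_l(x) - G_l(s) \big\|_F^2 , \]
where $F_l(\cdot)$ denotes the feature maps of layer $l$ of a pre-trained CNN, $l_c$ a chosen content layer, $G_l(\cdot)$ the corresponding Gram matrices, $c$ the content image, $s$ the style image and $x$ the image being optimised; this is a sketch of the general form, not the exact weighting used in any cited work.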
The idea <|cite_start|> (Reference: Image Style Transfer using Convolutional Neural Networks: Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.) <|cite_end|>inspired many follow-up studies.
Yin <|cite_start|> (Reference: Content Aware Neural Style Transfer: This paper presents a content-aware style transfer algorithm for paintings and photos of similar content using pre-trained neural network, obtaining better results than the previous work. In addition, the numerical experiments show that the style pattern and the content information is not completely separated by neural network.) <|cite_end|>presents a content-aware style transfer method which initializes the optimization algorithm with a content image instead of a random noise. Li and Wand <|cite_start|> (Reference: Combining Markov Random Fields and Convolutional Neural Networks for Image Synthesis: This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher-levels of a dCNN feature pyramid, controling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting synthezing photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods.) <|cite_end|>propose a patch-based style transfer method by combining Markov Random Field (MRF) and CNN techniques. The work <|cite_start|> (Reference: Preserving Color in Neural Artistic Style Transfer: This note presents an extension to the neural artistic style transfer algorithm (Gatys et al.). The original algorithm transforms an image to have the style of another given image. For example, a photograph can be transformed to have the style of a famous painting. Here we address a potential shortcoming of the original method: the algorithm transfers the colors of the original painting, which can alter the appearance of the scene in undesirable ways. We describe simple linear methods for transferring style while preserving colors.) <|cite_end|>proposes to transfer the style by using linear models. It preserves colors of content images by matching color histograms.
Gatys ~\emph{et al.} <|cite_start|> (Reference: Controlling Perceptual Factors in Neural Style Transfer: Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer.) <|cite_end|>decompose styles into perceptual factors and then manipulate them for the style transfer. Selim~\emph{et al.} <|cite_start|> (Reference: Painting Style Transfer for Head Portraits Using Convolutional Neural Networks: Head portraits are popular in traditional painting. Automating portrait painting is challenging as the human visual system is sensitive to the slightest irregularities in human faces. Applying generic painting techniques often deforms facial structures. On the other hand portrait painting techniques are mainly designed for the graphite style and/or are based on image analogies; an example painting as well as its original unpainted version are required. This limits their domain of applicability. We present a new technique for transferring the painting from a head portrait onto another. Unlike previous work our technique only requires the example painting and is not restricted to a specific style. We impose novel spatial constraints by locally transferring the color distributions of the example painting. This better captures the painting texture and maintains the integrity of facial structures. We generate a solution through Convolutional Neural Networks and we present an extension to video. Here motion is exploited in a way to reduce temporal inconsistencies and the shower-door effect. Our approach transfers the painting style while maintaining the input photograph identity. In addition it significantly reduces facial deformations over state of the art.) <|cite_end|>modify the content loss through a gain map for the head portrait painting transfer.
Wilmot ~\emph{et al.} <|cite_start|> (Reference: Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses: Recently, methods have been proposed that perform texture synthesis and style transfer by using convolutional neural networks (e.g. Gatys et al. [2015,2016]). These methods are exciting because they can in some cases create results with state-of-the-art quality. However, in this paper, we show these methods also have limitations in texture quality, stability, requisite parameter tuning, and lack of user controls. This paper presents a multiscale synthesis pipeline based on convolutional neural networks that ameliorates these issues. We first give a mathematical explanation of the source of instabilities in many previous approaches. We then improve these instabilities by using histogram losses to synthesize textures that better statistically match the exemplar. We also show how to integrate localized style losses in our multiscale framework. These losses can improve the quality of large features, improve the separation of content and style, and offer artistic controls such as paint by numbers. We demonstrate that our approach offers improved quality, convergence in fewer iterations, and more stability over the optimization.) <|cite_end|>use histogram-based losses in their objective and build on the Gatys~\emph{et al.}'s algorithm <|cite_start|> (Reference: Image Style Transfer using Convolutional Neural Networks: Rendering the semantic content of an image in different styles is a difficult image processing task. Arguably, a major limiting factor for previous approaches has been the lack of image representations that explicitly represent semantic information and, thus, allow to separate image content from style. Here we use image representations derived from Convolutional Neural Networks optimised for object recognition, which make high level image information explicit. We introduce A Neural Algorithm of Artistic Style that can separate and recombine the image content and style of natural images. The algorithm allows us to produce new images of high perceptual quality that combine the content of an arbitrary photograph with the appearance of numerous wellknown artworks. Our results provide new insights into the deep image representations learned by Convolutional Neural Networks and demonstrate their potential for high level image synthesis and manipulation.) <|cite_end|>. Although the above optimization-based methods further improve the quality of style transfer, they are computationally expensive due to the iterative optimization procedure, thus limiting their practical use.
To address the poor computational speed, feed-forward methods replace the original on-line iterative optimization step with training a feed-forward neural network off-line and generating stylized images on-line <|cite_start|> (Reference: Texture Networks: Feed-forward Synthesis of Textures and Stylized Images: Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys~et~al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.) <|cite_end|> <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|> <|cite_start|> (Reference: Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks: This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative neural networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of a numerical deconvolution in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required any longer at generation time, our run-time performance (0.25M pixel images at 25Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). 
We apply this idea to texture synthesis, style transfer, and video stylization.) <|cite_end|>.
Johnson \emph{et al.} <|cite_start|> (Reference: Perceptual Losses for Real-Time Style Transfer and Super-Resolution: We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a \emph{per-pixel} loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing \emph{perceptual} loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches, and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al in real-time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.) <|cite_end|>train a generative network for a fast style transfer using perceptual loss functions.
The architecture of their generator network follows the work <|cite_start|> (Reference: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks: In recent years, supervised learning with convolutional networks (CNNs) has seen huge adoption in computer vision applications. Comparatively, unsupervised learning with CNNs has received less attention. In this work we hope to help bridge the gap between the success of CNNs for supervised learning and unsupervised learning. We introduce a class of CNNs called deep convolutional generative adversarial networks (DCGANs), that have certain architectural constraints, and demonstrate that they are a strong candidate for unsupervised learning. Training on various image datasets, we show convincing evidence that our deep convolutional adversarial pair learns a hierarchy of representations from object parts to scenes in both the generator and discriminator. Additionally, we use the learned features for novel tasks - demonstrating their applicability as general image representations.) <|cite_end|>and also uses residual blocks. Another concurrent work <|cite_start|> (Reference: Texture Networks: Feed-forward Synthesis of Textures and Stylized Images: Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their methods requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys~et~al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.) <|cite_end|>, named Texture Network, employs a multi-resolution architecture in the generator network.
Ulyanov \emph{et al.} <|cite_start|> (Reference: Instance Normalization: The Missing Ingredient for Fast Stylization: It this paper we revisit the fast stylization method introduced in Ulyanov et. al. (2016). We show how a small change in the stylization architecture results in a significant qualitative improvement in the generated images. The change is limited to swapping batch normalization with instance normalization, and to apply the latter both at training and testing times. The resulting method can be used to train high-performance architectures for real-time image generation. The code will is made available on github at https://github.com/DmitryUlyanov/texture_nets. Full paper can be found at arXiv:1701.02096.) <|cite_end|> <|cite_start|> (Reference: Improved Texture Networks: Maximizing Quality and Diversity in Feed-forward Stylization and Texture Synthesis: The recent work of Gatys et al., who characterized the style of an image by the statistics of convolutional neural network filters, ignited a renewed interest in the texture generation and image stylization problems. While their image generation technique uses a slow optimization process, recently several authors have proposed to learn generator neural networks that can produce similar outputs in one quick forward pass. While generator networks are promising, they are still inferior in visual quality and diversity compared to generation-by-optimization. In this work, we advance them in two significant ways. First, we introduce an instance normalization module to replace batch normalization with significant improvements to the quality of image stylization. Second, we improve diversity by introducing a new learning formulation that encourages generators to sample unbiasedly from the Julesz texture ensemble, which is the equivalence class of all images characterized by certain filter responses. Together, these two improvements take feed forward texture synthesis and image stylization much closer to the quality of generation-via-optimization, while retaining the speed advantage.) <|cite_end|> | [
"<|reference_start|> Diversified Texture Synthesis with Feed-forward Networks: Recent progresses on deep discriminative and generative modeling have shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency, which suffer from many issues, such as shortage of generality (i.e., build one network per texture), lack of diversity (i.e., always produce visually identical output) and suboptimality (i.e., generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques are introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications with the stylization. <|reference_end|>",
"<|reference_start|> Sparse Coding for Third-order Super-symmetric Tensor Descriptors with Application to Texture Recognition: Super-symmetric tensors - a higher-order extension of scatter matrices - are becoming increasingly popular in machine learning and computer vision for modeling data statistics, co-occurrences, or even as visual descriptors. They were shown recently to outperform second-order approaches, however, the size of these tensors are exponential in the data dimensionality, which is a significant concern. In this paper, we study third-order supersymmetric tensor descriptors in the context of dictionary learning and sparse coding. For this purpose, we propose a novel non-linear third-order texture descriptor. Our goal is to approximate these tensors as sparse conic combinations of atoms from a learned dictionary. Apart from the significant benefits to tensor compression that this framework offers, our experiments demonstrate that the sparse coefficients produced by this scheme lead to better aggregation of high-dimensional data and showcase superior performance on two common computer vision tasks compared to the state of the art. <|reference_end|>",
"<|reference_start|> Controlling Perceptual Factors in Neural Style Transfer: Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer. <|reference_end|>",
"<|reference_start|> A study on recognizing non-artistic face sketches: Face sketches are being used in eyewitness testimonies for about a century. These sketches are crucial in finding suspects when no photo is available, but a mental image in the eyewitness's mind. However, research shows that current procedures used for eyewitness testimonies have two main problems. First, they can significantly disturb the memories of the eyewitness. Second, in many cases, these procedures result in face images far from their target faces. These two problems are related to the plasticity of the human visual system and the differences between face perception in humans (holistic) and current methods of sketch production (piecemeal). In this paper, we present some insights for more realistic sketch to photo matching. We describe how to retrieve identity specific information from crude sketches, directly drawn by the non-artistic eyewitnesses. The sketches we used merely contain facial component outlines and facial marks (e.g. wrinkles and moles). We compare results of automatically matching two types sketches (trace-over and user-provided, 25 each) to four types of faces (original, locally exaggerated, configurally exaggerated, and globally exaggerated, 249 each), using two methods (PDM distance comparison and PCA classification). Based on our results, we argue that for automatic non-artistic sketch to photo matching, the algorithms should compare the user-provided sketches with globally exaggerated faces, with a soft constraint on facial marks, to achieve the best matching rates. This is because the user-provided sketch from the user's mental image, seems to be caricatured both locally and configurally. <|reference_end|>"
] | [
9,
15,
20,
46
] | {"<|cite_1|>": "arxiv-94708", "<|cite_2|>": "arxiv-94708", "<|cite_3|>": "ss-684085", "<|cite_4|>": "arxiv-94708", "<|cite_5|>": "arxiv-110890", "<|multi_cite_6_1|>": "arxiv-93753", "<|multi_cite_6_2|>": "arxiv-102888", "<|multi_cite_6_3|>": "arxiv-94708", "<|multi_cite_6_4|>": "arxiv-108534", "<|multi_cite_6_5|>": "arxiv-118271", "<|multi_cite_6_6|>": "arxiv-112481", "<|multi_cite_6_7|>": "arxiv-119568", "<|multi_cite_6_8|>": "arxiv-119552", "<|multi_cite_7_1|>": "ss-1051560", "<|multi_cite_7_2|>": "arxiv-110946", "<|multi_cite_7_3|>": "ss-1060674", "<|multi_cite_8_1|>": "arxiv-96001", "<|multi_cite_8_2|>": "arxiv-110679", "<|multi_cite_9_1|>": "arxiv-96001", "<|multi_cite_9_2|>": "arxiv-110679", "<|multi_cite_10_1|>": "arxiv-110890", "<|multi_cite_10_2|>": "arxiv-96001", "<|multi_cite_10_3|>": "arxiv-93753", "<|multi_cite_10_4|>": "arxiv-94708", "<|cite_11|>": "arxiv-78899", "<|multi_cite_12_1|>": "arxiv-62064", "<|multi_cite_12_2|>": "arxiv-79663", "<|multi_cite_12_3|>": "ss-1260525", "<|multi_cite_12_4|>": "ss-1959346", "<|multi_cite_13_1|>": "arxiv-91001", "<|multi_cite_13_2|>": "arxiv-54350", "<|multi_cite_13_3|>": "arxiv-91001", "<|multi_cite_13_4|>": "arxiv-62064", "<|multi_cite_13_5|>": "arxiv-79663", "<|multi_cite_13_6|>": "arxiv-114814", "<|multi_cite_13_7|>": "arxiv-147224", "<|multi_cite_14_1|>": "arxiv-105885", "<|multi_cite_14_2|>": "ss-1959346", "<|multi_cite_14_3|>": "arxiv-121607", "<|multi_cite_14_4|>": "ss-1431274", "<|multi_cite_14_5|>": "ss-1260525", "<|cite_15|>": "arxiv-96683", "<|cite_16|>": "arxiv-111675", "<|cite_17|>": "arxiv-111535", "<|cite_18|>": "arxiv-96001", "<|cite_19|>": "arxiv-110679", "<|multi_cite_20_1|>": "ss-1632782", "<|multi_cite_20_2|>": "ss-1633466", "<|multi_cite_20_3|>": "ss-2451497", "<|multi_cite_20_4|>": "ss-1012426", "<|cite_21|>": "ss-683396", "<|cite_22|>": "ss-683396", "<|cite_23|>": "arxiv-90560", "<|cite_24|>": "arxiv-90567", "<|cite_25|>": "arxiv-100438", "<|cite_26|>": "arxiv-110890", "<|cite_27|>": "ss-1445990", "<|cite_28|>": "arxiv-115526", "<|cite_29|>": "ss-683396", "<|multi_cite_30_1|>": "arxiv-93753", "<|multi_cite_30_2|>": "arxiv-94708", "<|multi_cite_30_3|>": "arxiv-96001", "<|cite_31|>": "arxiv-94708", "<|cite_32|>": "arxiv-87648", "<|cite_33|>": "arxiv-93753", "<|multi_cite_34_1|>": "arxiv-102888", "<|multi_cite_34_2|>": "arxiv-113969", "<|cite_35|>": "arxiv-111967", "<|multi_cite_36_1|>": "arxiv-108534", "<|multi_cite_36_2|>": "arxiv-112481", "<|multi_cite_36_3|>": "arxiv-120134", "<|multi_cite_36_4|>": "arxiv-118271", "<|cite_37|>": "arxiv-108534", "<|cite_38|>": "arxiv-112481", "<|cite_39|>": "arxiv-120134", "<|cite_40|>": "arxiv-118271", "<|multi_cite_41_1|>": "arxiv-118271", "<|multi_cite_41_2|>": "arxiv-119552", "<|multi_cite_41_3|>": "arxiv-119568", "<|multi_cite_42_1|>": "arxiv-113969", "<|multi_cite_42_2|>": "arxiv-102888", "<|multi_cite_42_3|>": "arxiv-123332"} |
2002.11152-0 | <|paper_start|> Title: Fundamental Issues Regarding Uncertainties in Artificial Neural Networks
Abstract: Fundamental Issues Regarding Uncertainties in Artificial Neural Networks: Artificial Neural Networks (ANNs) implement a specific form of multi-variate extrapolation and will generate an output for any input pattern, even when there is no similar training pattern. Extrapolations are not necessarily to be trusted, and in order to support safety-critical systems, we require such systems to give an indication of the training-sample-related uncertainty associated with their output. Some readers may think that this is a well-known issue which is already covered by the basic principles of pattern recognition. We will explain below how this is not the case and how the conventional (Likelihood estimate of) conditional probability of classification does not correctly assess this uncertainty. We provide a discussion of the standard interpretations of this problem and show how a quantitative approach based upon long-standing methods can be practically applied. The methods are illustrated on the task of early diagnosis of dementing diseases using Magnetic Resonance Imaging.
Introduction
Machine learning, and in particular artificial neural networks, have been applied successfully in a number of areas with state-of-the-art performance <|cite_start|> (Reference: A State-of-the-Art Survey on Deep Learning Theory and Architectures: In recent years, deep learning has garnered tremendous success in a variety of application domains. This new field of machine learning has been growing rapidly and has been applied to most traditional application domains, as well as some new areas that present more opportunities. Different methods have been proposed based on different categories of learning, including supervised, semi-supervised, and un-supervised learning. Experimental results show state-of-the-art performance using deep learning when compared to traditional machine learning approaches in the fields of image processing, computer vision, speech recognition, machine translation, art, medical imaging, medical information processing, robotics and control, bioinformatics, natural language processing, cybersecurity, and many others. This survey presents a brief survey on the advances that have occurred in the area of Deep Learning (DL), starting with the Deep Neural Network (DNN). The survey goes on to cover Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Units (GRU), Auto-Encoder (AE), Deep Belief Network (DBN), Generative Adversarial Network (GAN), and Deep Reinforcement Learning (DRL). Additionally, we have discussed recent developments, such as advanced variant DL techniques based on these DL approaches. This work considers most of the papers published after 2012 from when the history of deep learning began. Furthermore, DL approaches that have been explored and evaluated in different application domains are also included in this survey. We also included recently developed frameworks, SDKs, and benchmark datasets that are used for implementing and evaluating deep learning approaches. There are some surveys that have been published on DL using neural networks and a survey on Reinforcement Learning (RL). However, those papers have not discussed individual advanced techniques for training large-scale deep learning models and the recently developed method of generative models.) <|cite_end|>. A key research challenge, identified by The Royal Society, is verification and robustness, especially for safety-critical applications, where the quality of decisions and predictions must be verifiable to a high standard.
This high standard of robustness must be maintained not only in the large-scale, big-data scenario, but also in applications where only small amounts of labelled data are available.
Conventional descriptions of pattern recognition systems relate the output of ANNs to the conditional probability of classification $P(C|{\bf X})$. Although it would be convenient to assume that this output tells us something useful about uncertainty, in reality it does not. The problem arises from the density of samples (${\bf X}$) in the vicinity of the input pattern: when there is only one pattern from which to determine the output, $P(C|{\bf X})$ will be driven to a value of 0 or 1, whereas from a statistical perspective the sample size is simply not large enough to justify such confidence. In order to really understand our output we need to know not only $P(C|{\bf X})$ but also the total sample density which gave rise to it. This makes it possible to check that output data will support decision making at a level which meets performance specifications <|cite_start|> (Reference: Performance assessment of near-perfect machines: ) <|cite_end|>.
One way to explain this is to say that the system output is the maximum likelihood estimate of $P(C|{\bf X})$, whereas what we need to know is the expectation value $E\left[P(C|{\bf X})\right]$. We will illustrate this difference below for binomial statistics, which is the simplest sample-based probability estimate.
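As a minimal illustration of the distinction (assuming a uniform $\mathrm{Beta}(1,1)$ prior on the class probability; the detailed treatment follows later in the paper): if $k$ of the $n$ training patterns in the vicinity of ${\bf X}$ belong to class $C$, then
\[ \hat{P}(C|{\bf X}) = \frac{k}{n} \qquad \text{(maximum likelihood)}, \qquad E\left[P(C|{\bf X})\right] = \frac{k+1}{n+2} \qquad \text{(posterior expectation)}. \]
For a single supporting pattern ($n=1$, $k=1$) the former asserts certainty ($\hat{P}=1$), while the latter gives only $2/3$.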
Some have tackled
this problem using a ``what if'' approach, where an effort is made
to identify the specific pieces of information which have most influence <|cite_start|> (Reference: Towards Explainable AI for Channel Estimation in Wireless Communications: Research into 6G networks has been initiated to support a variety of critical artificial intelligence (AI) assisted applications such as autonomous driving. In such applications, AI-based decisions should be performed in a real-time manner. These decisions include resource allocation, localization, channel estimation, etc. Considering the black-box nature of existing AI-based models, it is highly challenging to understand and trust the decision-making behavior of such models. Therefore, explaining the logic behind those models through explainable AI (XAI) techniques is essential for their employment in critical applications. This manuscript proposes a novel XAI-based channel estimation (XAI-CHEST) scheme that provides detailed reasonable interpretability of the deep learning (DL) models that are employed in doubly-selective channel estimation. The aim of the proposed XAI-CHEST scheme is to identify the relevant model inputs by inducing high noise on the irrelevant ones. As a result, the behavior of the studied DL-based channel estimators can be further analyzed and evaluated based on the generated interpretations. Simulation results show that the proposed XAI-CHEST scheme provides valid interpretations of the DL-based channel estimators for different scenarios.) <|cite_end|>.
Bishop <|cite_start|> (Reference: Novelty detection and neural network validation: ) <|cite_end|>considered the problem of validating outputs from a multi-layer perceptron by explicitly modelling the density of the input space using a Parzen window <|cite_start|> (Reference: On estimation of a probability density function and
mode: Abstract : Given a sequence of independent identically distributed random variables with a common probability density function, the problem of the estimation of a probability density function and of determining the mode of a probability function are discussed. Only estimates which are consistent and asymptotically normal are constructed. (Author)) <|cite_end|> based approach. For areas of the input space with low density, the outputs are flagged as unreliable. In a similar approach, based on radial basis function networks, Leonard \emph{et al.} <|cite_start|> (Reference: A NEURAL NETWORK ARCHITECTURE THAT COMPUTES ITS OWN RELIABILITY: ) <|cite_end|> <|cite_start|> (Reference: Using radial basis functions to approximate a function and its error bounds: A novel network called the validity index network (VI net) is presented. The VI net, derived from radial basis function networks, fits functions and calculates confidence intervals for its predictions, indicating local regions of poor fit and extrapolation.) <|cite_end|> use the hidden nodes of the network as the model of the input space density.
As well as flagging unreliable outputs due to low input density, the method attempts to put 95\% confidence intervals on the outputs. These are based on Student's t-statistics, at the 95\% confidence level, of the cross-validation error, with the number of degrees of freedom given by the number of input vectors that significantly activate the contributing hidden nodes.
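Written schematically (this is a paraphrase of the construction just described, not the authors' exact expression), the local error bar attached to a prediction takes the form
\[ \hat{y}({\bf x}) \;\pm\; t_{0.975,\,n({\bf x})}\, s({\bf x}), \]
where $s({\bf x})$ is a locally weighted estimate of the cross-validation error of the hidden units activated by ${\bf x}$, and $n({\bf x})$ is the number of training vectors that significantly activate them.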
Uncertainties arising from artificial decision systems can be grouped into three categories: statistical uncertainties (due to perturbations in input data), systematic uncertainties (due to the uncertainties associated with training) and bias (due to the use of a mis-specified functional model). Ideally, to understand the reliability of an output we need all three.
Kendall and Gal refer to processes of aleatoric and epistemic uncertainty, of which the former is the statistical error and the latter is at least the systematic error. In this paper we will assume that epistemic error is systematic error and discuss bias as a separate issue.
The statistical uncertainty can be obtained in a relatively straightforward manner, by perturbing the input and observing the consequent variation in output. Numerical approaches based upon error propagation can even make use of the derivatives used during training to make such assessments.
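As a concrete sketch of these two routes (an illustration only, assuming the trained network is available as a callable with a known input-noise scale and, for the analytic route, an input-output Jacobian; it is not the implementation used in this paper):

\begin{verbatim}
import numpy as np

def perturbation_uncertainty(f, x, sigma_x, n_samples=1000, seed=0):
    # Monte Carlo route: sample inputs around x and observe the
    # spread of the network outputs.
    rng = np.random.default_rng(seed)
    xs = x + rng.normal(scale=sigma_x, size=(n_samples,) + x.shape)
    ys = np.array([f(xi) for xi in xs])
    return ys.mean(axis=0), ys.std(axis=0)

def propagated_covariance(jacobian, cov_x):
    # First-order (delta-method) route: Cov[y] ~= J Cov[x] J^T,
    # reusing the derivatives already computed for training.
    return jacobian @ cov_x @ jacobian.T
\end{verbatim}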
In the earlier work of Gal <|cite_start|> (Reference: Diagnostic Accuracy of the Posttraumatic Stress Disorder Checklist-civilian Version in a Predictors of Dropout in an Outpatient Treatment for Problem Drinkers including Cognitive-behavioral Therapy and the Opioid Antagonist Naltrexone. the Relationship between Borderline Personality Disorder and B: Predictors of dropout in an outpatient treatment for problem drinkers including cognitive-behavioral therapy and the opioid antagonist naltrexone. The relationship between borderline personality disorder and bipolar disorder. Reduction of cognitive concerns of anxiety sensitivity is uniquely associated with reduction of PTSD and depressive symptoms: A comparison of civilians and veterans. Integration of peer support and computer-based CBT for veterans with depression. Posttraumatic stress disorder is associated with limited executive resources in a working memory task. Cross-sectional prevalence survey of intimate partner violence perpetration and victimization in Canadian military personnel. Chronic traumatic encephalopathy and risk of suicide in former athletes. Motivation to persist with internet-based cognitive behavioural treatment using blended care: a qualitative study. Insomnia and its impact on physical and mental health. Neuroscience-driven discovery and development of sleep therapeutics. Representative Military Sample. Long-term stability of cognitive behavioral therapy effects for panic disorder with agoraphobia: A two-year follow-up study. Recent randomized controlled trials of psychological interventions in healthcare: A review of their quantity, scope, and characteristics. Change in sleep symptoms across Cognitive Processing Therapy and Prolonged Exposure: A longitudinal perspective. Correlates of cortisol in human hair: implications for epidemiologic studies on health effects of chronic stress. Comorbid depressive symptoms in treatment-seeking PTSD outpatients affect multiple domains of quality of life. Prescriptive variables for d-cycloserine augmentation of exposure therapy for posttraumatic stress disorder. Sleep disorders and the interpersonal-psychological theory of suicide: Independent pathways to suicidality? The role of sleep disturbance in the relationship between post-traumatic stress disorder and suicidal ideation. Longitudinal course of anxiety sensitivity and PTSD symptoms in cognitive-behavioral therapies for PTSD. Substance Abuse and Mental Health Services Administration SAMHSA's newly-released publication, Behavioral Health, United States, 2012, the latest in a series of publications issued by SAMHSA biannually since 1980, provides in-depth information regarding the current status of the mental health and substance abuse field. It includes behavioral health statistics at the national and State levels from 40 different data sources. The report includes three analytic chapters: The volume also includes 172 tables, which are organized into four sections: Behavioral Health of the Population: the mental health status of the U.S. population and prevalence of mental illness; Behavioral Health Service Utilization: providers and settings for behavioral health services; types of behavioral health services provided; and rates of utilization; Behavioral …) <|cite_end|>statistical uncertainty is modelled via a subjective belief in the smoothness of the output function. This prior assumption has a constraining effect similar to the covariance function used in Gaussian processes. 
In the more recent work, this term is reinterpreted as the amount of noise inherent in the observed output data, and is either tuned (for homoscedastic noise) or learned as a function of the input data (for heteroscedastic noise).
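One common form of such an objective (given here as an illustration rather than as the exact loss used in the cited work) is the Gaussian negative log-likelihood with a learned, input-dependent variance,
\[ \mathcal{L} \;=\; \frac{1}{N}\sum_{i=1}^{N} \left[ \frac{\big\| y_i - f(x_i) \big\|^2}{2\,\sigma(x_i)^2} + \frac{1}{2}\log \sigma(x_i)^2 \right], \]
where for homoscedastic noise $\sigma$ collapses to a single tuned constant, while for heteroscedastic noise $\sigma(x_i)$ is predicted by the network alongside $f(x_i)$.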
In order to estimate the systematic (epistemic) uncertainty associated with a predictor, we first need either a quantitative description of the possible variations in training data, or a description of the uncertainty on the associated parameters (consistent with the former). Variations over data may be computationally intensive to assess. If it is possible to obtain the uncertainties in parameters, then any subsequent estimation of output uncertainty is likely to be more efficient, as the number of trained parameters should always be less than the number of training patterns.
A recent approach to this has been suggested by Gal and Ghahramani <|cite_start|> (Reference: Diagnostic Accuracy of the Posttraumatic Stress Disorder Checklist-civilian Version in a Predictors of Dropout in an Outpatient Treatment for Problem Drinkers including Cognitive-behavioral Therapy and the Opioid Antagonist Naltrexone. the Relationship between Borderline Personality Disorder and B: Predictors of dropout in an outpatient treatment for problem drinkers including cognitive-behavioral therapy and the opioid antagonist naltrexone. The relationship between borderline personality disorder and bipolar disorder. Reduction of cognitive concerns of anxiety sensitivity is uniquely associated with reduction of PTSD and depressive symptoms: A comparison of civilians and veterans. Integration of peer support and computer-based CBT for veterans with depression. Posttraumatic stress disorder is associated with limited executive resources in a working memory task. Cross-sectional prevalence survey of intimate partner violence perpetration and victimization in Canadian military personnel. Chronic traumatic encephalopathy and risk of suicide in former athletes. Motivation to persist with internet-based cognitive behavioural treatment using blended care: a qualitative study. Insomnia and its impact on physical and mental health. Neuroscience-driven discovery and development of sleep therapeutics. Representative Military Sample. Long-term stability of cognitive behavioral therapy effects for panic disorder with agoraphobia: A two-year follow-up study. Recent randomized controlled trials of psychological interventions in healthcare: A review of their quantity, scope, and characteristics. Change in sleep symptoms across Cognitive Processing Therapy and Prolonged Exposure: A longitudinal perspective. Correlates of cortisol in human hair: implications for epidemiologic studies on health effects of chronic stress. Comorbid depressive symptoms in treatment-seeking PTSD outpatients affect multiple domains of quality of life. Prescriptive variables for d-cycloserine augmentation of exposure therapy for posttraumatic stress disorder. Sleep disorders and the interpersonal-psychological theory of suicide: Independent pathways to suicidality? The role of sleep disturbance in the relationship between post-traumatic stress disorder and suicidal ideation. Longitudinal course of anxiety sensitivity and PTSD symptoms in cognitive-behavioral therapies for PTSD. Substance Abuse and Mental Health Services Administration SAMHSA's newly-released publication, Behavioral Health, United States, 2012, the latest in a series of publications issued by SAMHSA biannually since 1980, provides in-depth information regarding the current status of the mental health and substance abuse field. It includes behavioral health statistics at the national and State levels from 40 different data sources. The report includes three analytic chapters: The volume also includes 172 tables, which are organized into four sections: Behavioral Health of the Population: the mental health status of the U.S. population and prevalence of mental illness; Behavioral Health Service Utilization: providers and settings for behavioral health services; types of behavioral health services provided; and rates of utilization; Behavioral …) <|cite_end|>, using drop-out <|cite_start|> (Reference: {Dropout: A Simple Way to Prevent Neural Networks from Overfitting: Deep neural nets with a large number of parameters are very powerful machine learning systems. 
However, overfitting is a serious problem in such networks. Large networks are also slow to use, making it difficult to deal with overfitting by combining the predictions of many different large neural nets at test time. Dropout is a technique for addressing this problem. The key idea is to randomly drop units (along with their connections) from the neural network during training. This prevents units from co-adapting too much. During training, dropout samples from an exponential number of different "thinned" networks. At test time, it is easy to approximate the effect of averaging the predictions of all these thinned networks by simply using a single unthinned network that has smaller weights. This significantly reduces overfitting and gives major improvements over other regularization methods. We show that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.) <|cite_end|>in a monte carlo style approach, to estimate uncertainty on the outputs. It is shown that training a neural network with drop out is mathematically equivalent to a Gaussian process <|cite_start|> (Reference: {Gaussian processes for machine learning: Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible non-parametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations.13,78,31 The mathematical literature on GPs is large and often uses deep concepts which are not required to fully understand most machine learning applications. In this tutorial paper, we aim to present characteristics of GPs relevant to machine learning and to show up precise connections to other "kernel machines" popular in the community. Our focus is on a simple presentation, but references to more detailed sources are provided.) <|cite_end|>. Drop out is used as computationally efficient approximation to variational inference <|cite_start|> (Reference: {Practical variational inference for neural networks: Variational methods have been previously explored as a tractable approximation to Bayesian inference for neural networks. However the approaches proposed so far have only been applicable to a few simple network architectures. This paper introduces an easy-to-implement stochastic variational method (or equivalently, minimum description length loss function) that can be applied to most neural networks. 
Along the way it revisits several common regularisers from a variational perspective. It also provides a simple pruning heuristic that can both drastically reduce the number of network weights and lead to improved generalisation. Experimental results are provided for a hierarchical multidimensional recurrent neural network applied to the TIMIT speech corpus.) <|cite_end|> <|cite_start|> (Reference: Weight Uncertainty in Neural Networks: We introduce a new, efficient, principled and backpropagation-compatible algorithm for learning a probability distribution on the weights of a neural network, called Bayes by Backprop. It regularises the weights by minimising a compression cost, known as the variational free energy or the expected lower bound on the marginal likelihood. We show that this principled kind of regularisation yields comparable performance to dropout on MNIST classification. We then demonstrate how the learnt uncertainty in the weights can be used to improve generalisation in non-linear regression problems, and how this weight uncertainty can be used to drive the exploration-exploitation trade-off in reinforcement learning.) <|cite_end|> <|cite_start|> (Reference: Keeping the neural networks simple by minimizing the description length of the weights: Supervised neural networks generalize well if there is much less information in the weights than there is in the output vectors of the training cases. So during learning, it is important to keep the weights simple by penalizing the amount of information they contain. The amount of information in a weight can be controlled by adding Gaussian noise and the noise level can be adapted during learning to optimize the trade-o(cid:11) between the expected squared error of the network and the amount of information in the weights. We describe a method of computing the derivatives of the expected squared error and of the amount of information in the noisy weights in a network that contains a layer of non-linear hidden units. Provided the output units are linear, the exact derivatives can be computed e(cid:14)ciently without time-consuming Monte Carlo simulations. The idea of minimizing the amount of information that is required to communicate the weights of a neural network leads to a number of interesting schemes for encoding the weights.) <|cite_end|>without increasing the number of model parameters. However, the equivalence only holds for large numbers of hidden nodes and is thus not fully scalable or universally valid.
Predictive uncertainty, due to systematic uncertainty on the weights is obtained, using drop-out as a Monte Carlo integration, by estimating the first two raw moments, under the assumption that the joint density of the outputs are diagonal multivariate normal. In this case, the number of terms in the integration is limited by the number of weights, and only appropriate for large scale networks. Even with large scale networks, the multivariate joint density of the weights is not fully sampled and correlations between parameters, which may be significant <|cite_start|> (Reference: Ensemble learning in bayesian neural networks: Bayesian treatments of learning in neural networks are typically based either on a local Gaussian approximation to a mode of the posterior weight distribution, or on Markov chain Monte Carlo simulations. A third approach, called ensemble learning, was introduced by Hinton and van Camp (1993). It aims to approximate the posterior distribution by minimizing the Kullback-Leibler divergence between the true posterior and a parametric approximating distribution. The original derivation of a deterministic algorithm relied on the use of a Gaussian approximating distribution with a diagonal covariance matrix and hence was unable to capture the posterior correlations between parameters. In this chapter we show how the ensemble learning approach can be extended to full-covariance Gaussian distributions while remaining computationally tractable. We also extend the framework to deal with hyperparameters, leading to a simple re-estimation procedure. One of the benefits of our approach is that it yields a strict lower bound on the marginal likelihood, in contrast to other approximate procedures.) <|cite_end|>, are not accounted for. In more recent work <|cite_start|> (Reference: Concrete dropout: Dropout is used as a practical tool to obtain uncertainty estimates in large vision models and reinforcement learning (RL) tasks. But to obtain well-calibrated uncertainty estimates, a grid-search over the dropout probabilities is necessary—a prohibitive operation with large models, and an impossible one with RL. We propose a new dropout variant which gives improved performance and better calibrated uncertainties. Relying on recent developments in Bayesian deep learning, we use a continuous relaxation of dropout’s discrete masks. Together with a principled optimisation objective, this allows for automatic tuning of the dropout probability in large models, and as a result faster experimentation cycles. In RL this allows the agent to adapt its uncertainty dynamically as more data is observed. We analyse the proposed variant extensively on a range of tasks, and give insights into common practice in the field where larger dropout probabilities are often used in deeper model layers.) <|cite_end|>, the discrete form of drop-out is relaxed to a continuous Concrete distribution <|cite_start|> (Reference: The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. 
While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks.) <|cite_end|>with improved uncertainty estimates when compared to synthetic data with known uncertainties.
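To make the Monte Carlo reading of drop-out concrete, the following is a minimal, framework-free sketch (our own illustration, not taken from the cited works): a toy fully-connected network whose drop-out masks are resampled on every test-time forward pass, with the predictive mean and variance accumulated over the passes. All sizes, weights and the drop-out rate are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 1 input -> 50 hidden (ReLU) -> 1 output; the weights would be
# fixed after training, here they are random purely for illustration.
W1 = rng.normal(size=(50, 1)); b1 = np.zeros(50)
W2 = rng.normal(size=(1, 50)) / np.sqrt(50); b2 = np.zeros(1)
p_drop = 0.5                        # drop-out probability (illustrative)

def forward(x):
    h = np.maximum(0.0, W1 @ x + b1)
    mask = rng.random(h.shape) > p_drop      # resample the drop-out mask
    h = h * mask / (1.0 - p_drop)            # inverted drop-out scaling
    return W2 @ h + b2

x = np.array([0.3])
samples = np.array([forward(x) for _ in range(2000)])   # stochastic passes
mean = samples.mean(axis=0)         # first raw moment of the outputs
var = samples.var(axis=0)           # spread induced by the sampled masks
print("predictive mean:", mean, " predictive variance:", var)
\end{verbatim}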
We wish to make explicit the amount of epistemic uncertainty associated with any decision system,
and the consequent uncertainty arising during its use.
We will tackle the problem using approaches more closely related to
formal statistics than the work cited above (see for example <|cite_start|> (Reference: Network information criterion-determining the number of hidden units for an artificial neural network model: The problem of model selection, or determination of the number of hidden units, can be approached statistically, by generalizing Akaike's information criterion (AIC) to be applicable to unfaithful (i.e., unrealizable) models with general loss criteria including regularization terms. The relation between the training error and the generalization error is studied in terms of the number of the training examples and the complexity of a network which reduces to the number of parameters in the ordinary statistical theory of AIC. This relation leads to a new network information criterion which is useful for selecting the optimal network model based on a given training set.) <|cite_end|>), but will explain below
how earlier methods were restricted both by numerical practicalities and unrealistic approximations.
In our opinion, a quantitative statistical approach for the assessment of epistemic uncertainty
would involve sampling of weights from their expected uncertainty distribution
whilst maintaining a high standard of validation, even for cases involving small data and/or networks.
The correct interpretation of a training cost function is as a Likelihood,
and its maximisation is the de facto method for parameter estimation.
However, even when taking steps to ensure that the Likelihood
construct is a valid statistical description of data uncertainty (i.e. honest <|cite_start|> (Reference: Moving Towards Probability Forecasting: This paper proposes an international collaboration between researchers in academia and policymaking institutions to stimulate and coordinate research on probability forecasting in macroeconomics, developing a toolbox for short-term prediction. The toolbox should include time series models, methods for forecast combination, and techniques for probabilistic forecast evaluation in order to reduce the setup costs and risks to both individual researchers and policymaking organizations. A particular emphasis should be placed on replication studies with the toolbox so that central bankers can be sure that they are utilizing best practice techniques to produce probabilistic forecasts of events of interest. Full publication: Globalisation and Inflation Dynamics in Asia and the Pacific) <|cite_end|>),
the variation of the Likelihood function over the parameter has the wrong
properties to allow it to be interpreted directly as a parameter density.
Regardless of the choice of probability framework (Bayesian or
Frequentist) it is accepted that a Likelihood function needs to be multiplied
by something which is a function of the parameters in order to construct
a consistent (Bayesian) or quantitatively valid (Frequentist) description
of uncertainty. Therefore, our first task is to identify the principles
which specify how this should be done.
\subsection{Background Theory \& Notation}
We introduce here the theory associated with estimation of parameter
uncertainty when using Likelihood in order to identify the
relationships between competing methodologies and associated approximations.
Suppose we have a dataset of $n$ i.i.d. observations: $\mathbf{X} = \{X_{i}:i=1,\ldots n\} = \{X_{1},\ldots,X_{n}\}$, with an associated family of parametric
conditional pdf(s)
$p(X_{i}|\theta)$. We hence can write the pdf for the entire dataset as:
\mbox{$p(\mathbf{X}|\theta) \equiv \prod\limits_{i=1}^{n}p(X_{i}|\theta)$}. The Likelihood and log-likelihood for the observations are then:
\begin{equation}
\Like(\theta,\mathbf{X}) \equiv \Like_{n}(\theta) =
p(\mathbf{X}|\theta), \:\: l_{n} = \logL(\theta,\mathbf{X}), \:\: \mbox{and we define} \:\:\: \hat{\theta}(\mathbf{X})\:\: \mbox{s.t.} \:\: \left.\frac{\partial \Like(\theta,\mathbf{X})}{\partial \theta}\right|_{\theta=\hat{\theta}} = 0,
\end{equation}
where $\hat{\theta}(\mathbf{X})$ or $\MLEn$ is the maximum-likelihood estimator for the parameter. The standard Bayesian approach is to construct a posterior distribution over the space of parameters conditioned on the data by means of a prior distribution on parameter space $\pi(\theta)$ thus (see Jeffreys <|cite_start|> (Reference: The Theory of Probability: In the 19th century, both the ideas and methods of the theory of probability, developed since the second half of the 17th century, received new incentives for further progress. These stimuli, highly differing one from another in their essence, were connected with the development of the natural sciences, with practical requirements of society and, also, with the formulation of purely mathematical problems.) <|cite_end|>, Theorem 10):
\begin{equation}
\pi^*(\theta|\mathbf{X}) \propto \Like(\theta,\mathbf{X})\pi(\theta). \label{eqn:postdef}
\end{equation}
Ideally, in the absence of any other meaningful estimate of the priors,
we need an uninformative prior. Unfortunately, {\bf this is not as simple as
making $\pi(\boldsymbol{\theta})$ uniform} (e.g. as implicitly assumed
in <|cite_start|> (Reference: Network information criterion-determining the number of hidden units for an artificial neural network model: The problem of model selection, or determination of the number of hidden units, can be approached statistically, by generalizing Akaike's information criterion (AIC) to be applicable to unfaithful (i.e., unrealizable) models with general loss criteria including regularization terms. The relation between the training error and the generalization error is studied in terms of the number of the training examples and the complexity of a network which reduces to the number of parameters in the ordinary statistical theory of AIC. This relation leads to a new network information criterion which is useful for selecting the optimal network model based on a given training set.) <|cite_end|> <|cite_start|> (Reference: Model selection in neural networks: ) <|cite_end|>), as this makes a fundamental (and
unfortunately common) error regarding the correct use of probabilities
and probability densities{\footnote{An uninformative prior
{\it probability} is flat, an uninformative {\it density} is not, and, for
arbitrary parameters (see below), $\pi$ is the latter. This difference is hard to comprehend if no distinction is made between the two.}}.
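As a simple illustration of the footnote above (our own example, not taken from the cited works): if $\pi(\theta)$ is constant on some interval and we change variables to $\phi = \theta^{3}$, then the induced density is
\[
\tilde{\pi}(\phi) = \pi(\theta)\left|\frac{d\theta}{d\phi}\right| \propto \frac{1}{3}|\phi|^{-\frac{2}{3}},
\]
which is far from flat; a density that is uniform in one parameterisation is not uniform in another, so `flatness' cannot by itself express ignorance.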
Alternatively, a Jeffreys prior can also be constructed by using his `general rule', and requiring that the functional form of the prior is invariant under arbitrary transformations of the parameter space <|cite_start|> (Reference: An Invariant Form for the Prior Probability in Estimation Problems: It is shown that a certain differential form depending on the values of the parameters in a law of chance is invariant for all transformations of the parameters when the law is differentiable with regard to all parameters. For laws containing a location and a scale parameter a form with a somewhat restricted type of invariance is found even when the law is not everywhere differentiable with regard to the parameters. This form has the properties required to give a general rule for stating the prior probability in a large class of estimation problems.) <|cite_end|>.
However, we here instead give a na\"{\i}ve derivation for the case of a single parameter, in order to make clear the link between the Jeffreys priors under the Bayesian approach, and the approach taken by Welch and by Peers <|cite_start|> (Reference: On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods: ) <|cite_end|>, which leads to approaches that seek to achieve a Bayesian-Frequentist synthesis <|cite_start|> (Reference: Objective priors: An introduction for frequentists: Bayesian methods are increasingly applied in these days in the theory and practice of statistics. Any Bayesian inference depends on a likelihood and a prior. Ideally one would like to elicit a prior from related sources of information or past data. However, in its absence, Bayesian methods need to rely on some "objective" or "default" priors, and the resulting posterior inference can still be quite valuable. Not surprisingly, over the years, the catalog of objective priors also has become prohibitively large, and one has to set some specific criteria for the selection of such priors. Our aim is to review some of these criteria, compare their performance, and illustrate them with some simple examples. While for very large sample sizes, it does not possibly matter what objective prior one uses, the selection of such a prior does influence inference for small or moderate samples. For regular models where asymptotic normality holds, Jeffreys' general rule prior, the positive square root of the determinant of the Fisher information matrix, enjoys many optimality properties in the absence of nuisance parameters. In the presence of nuisance parameters, however, there are many other priors which emerge as optimal depending on the criterion selected. One new feature in this article is that a prior different from Jeffreys' is shown to be optimal under the chi-square divergence criterion even in the absence of nuisance parameters. The latter is also invariant under one-to-one reparameterization.) <|cite_end|> <|cite_start|> (Reference: The selection of prior distributions by formal rules: Abstract Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated b...) <|cite_end|>.
We will then exploit this interpretation to develop
a practical method for use with highly non-linear systems, i.e. ANNs.
\subsubsection{Interpretation of Jeffreys Priors}
We show below how the Jeffreys prior is
related to the process of re-mapping parameters to achieve Gaussian
Likelihood functions.
We start with the 1-parameter log-likelihood $l_{n}(\theta)$ or Likelihood $\Like_{n}(\theta)$ for $n$ data points, and hence the posterior:
\[
\pi^{*}_{n}(\theta) \propto \Like_{n}(\theta)\pi(\theta).
\]
We will consider the case where $n$ is large, and we will assume that the Likelihood is then strongly-peaked about the maximum likelihood estimate (MLE) of the parameter $\MLEn$. We then first shift and scale to define a new parameter, $\theta \rightarrow t$, where:
\begin{equation}
t \doteq \sqrt{n}(\theta-\MLEn)\hat{I}_{n}^{\frac{1}{2}}, \:\: \mbox{and:} \:\: \hat{I}_{n} = -\frac{1}{n}\left.\frac{d^{2}l_{n}}{d\theta^{2}}\right|_{\theta=\MLEn}.\label{eqn:FINdef}
\end{equation}
We here have introduced the empirical Fisher Information function per data point $I_{n}$, which has been replaced by its value at the optimum $\hat{I}_{n} \equiv I_{n}(\MLEn)$. We see that the new variable is centred on the peak of the Likelihood, and the width of the peak has been scaled by the second derivative at the peak (the curvature at the optimum) <|cite_start|> (Reference: Statistical methods and scientific inference.: ) <|cite_end|>.
We now expand the posterior about its value at $\theta=\MLEn$, by noting that $\theta-\MLEn$ is $O(n^{-\frac{1}{2}})$ (i.e. the width of the peak of $\Like_{n}(\theta)$ about the optimum scales as $\frac{1}{\sqrt{n}}$). We now write <|cite_start|> (Reference: Infereni on full or partial parameters based on the standardized signed log likelihood ratio: On etudie des tests et des limites de confiance bases sur la statistique du rapport de log vraisemblance signee, en ajustant cette statistique de telle sorte qu'a un ordre superieur d'approximation, la distribution normale standard soit valable) <|cite_end|>:
\[
\pi^{*}_{n}(t) \doteq C_{n}^{-1}\exp\left[l_{n}(\theta) - l_{n}(\MLEn)\right]\pi(\theta),
\]
where $C_{n}$ is the normalisation term. The Taylor expansion for the log-likelihood is:
\begin{align}
l_{n}(\theta)-l_{n}(\MLEn) &= \frac{1}{2}(\theta-\MLEn)^{2}l_{n}'' + \frac{1}{6}(\theta-\MLEn)^{3}l_{n}''' + O(n^{-1}),
\end{align}
where we have used prime notation to denote repeated derivatives (evaluated at $\MLEn$), remembering that, since we are expanding about the optimum, $l_{n}'\equiv 0$. Note also that, since $l_{n}$ is trivially $O(n)$, we have to include the cubic term in order to capture all contributions of $O(n^{-\frac{1}{2}})$. We also expand the prior:
\[
\pi(\theta) = \pi(\MLEn)\left[1 + (\theta-\MLEn)\left(\frac{\pi'}{\pi}\right) + O(n^{-1})\right],
\]
which is just the first, linear, correction to a constant prior.
Substituting in for $(\theta-\MLEn)$ in terms of $t$, after some algebra we find:
\begin{align}
\pi^{*}_{n}(t) &= \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}t^{2}\right)\left[1 + \frac{t}{(n\hat{I}_{n})^{\frac{1}{2}}}\left(\frac{\pi'}{\pi}\right) + \frac{t^{3}l'''_{n}}{6(n\hat{I}_{n})^{\frac{3}{2}}}\right] \nonumber \\
&+ O(n^{-1}).
\end{align}
We note that this expression is in agreement with the related expressions given by Welch and Peers <|cite_start|> (Reference: On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods: ) <|cite_end|>, although their method is more general, since they use the full moment generating function. The zeroth order piece of the posterior is a centred, unit Gaussian, which shows that we correctly scaled and shifted the parameter $\theta \rightarrow t$. The first-order corrections are both odd polynomials in $t$, hence the normalization is just the usual term $\frac{1}{\sqrt{2\pi}}$ for a Gaussian, with no corrections required at this order\footnote{The pi in the normalization should not be confused with the function $\pi(\cdot)$ used for the prior.}. The full result says that the first correction to the zeroth-order Gaussian comes from two terms: the third derivative of the log-likelihood at the optimum (which measures the extent to which the log-likelihood is not symmetric about that optimum), and the term which shows to what extent the prior is not symmetric about the optimum (that is, if $\pi' \ne 0$).
We might want to try getting these two terms to cancel, and hence obtain a posterior that is Gaussian to $O(n^{-1})$. But we cannot do this algebraically in $t$, since the expansion in powers of $n$ mixes powers of $t$, so that here we have both linear and cubic terms in $t$ of the same order in $n$. However, given the statements above about the symmetry of the Likelihood and the symmetry of the prior, we can instead require that the posterior should also be `symmetric', or at least centred. That is, we require that the expectation value of $t$ under the posterior vanishes to first order:
\begin{equation}
E_{\pi^{*}_{n}}\left[t\right] \doteq \int dt \: t\; \pi^{*}_{n}(t) = 0 + O(n^{-1}).
\end{equation}
The integrals can be computed, since we just need the expectation values of even powers of $t$ under a unit Gaussian. These are given by the formula:
\[
E_{N(0,1)}\left[ t^{2m} \right] = \frac{(-1)^{m}}{\sqrt{2}}\left.\left(\frac{d}{d\alpha}\right)^{m} \frac{1}{\sqrt{\alpha}}\right|_{\alpha = \frac{1}{2}},
\]
where $N(\mu,\sigma^{2})$ is a Gaussian distribution of mean $\mu$ and variance $\sigma^{2}$. We hence find:
\begin{equation}
E_{\pi^{*}_{n}}\left[t\right] = \frac{1}{\sqrt{n}\hat{I}_{n}^{\frac{3}{2}}}\left[
\hat{I}_{n} \left(\frac{\pi'}{\pi}\right) + \frac{1}{2n}\frac{d^{3}l_{n}}{d\theta^{3}}
\right]_{\theta=\MLEn} + O(n^{-1}).
\end{equation}
The posterior is hence centred to first-order if:
\[
\hat{I}_{n} \left(\frac{\pi'}{\pi}\right) + \frac{1}{2n}\frac{d^{3}l_{n}}{d\theta^{3}} = 0.
\]
Using the definition of $\hat{I}_{n}$ from (\ref{eqn:FINdef}), this can be rearranged to give the differential equation:
\begin{equation}
\hat{I}_{n}\pi' = \frac{\pi}{2}\frac{d}{d\theta}\hat{I}_{n} \:\: \Rightarrow \:\:
\frac{d}{d\theta}\left[\pi(\theta)\hat{I}_{n}^{-\frac{1}{2}}(\theta)\right] = 0.
\end{equation}
This differential equation corresponds to equations (29) \& (30) of Welch and Peers <|cite_start|> (Reference: On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods: ) <|cite_end|>, and the equation for a first-order matching prior from the probabilistic-matching priors literature (e.g., see Ghosh <|cite_start|> (Reference: Objective priors: An introduction for frequentists: Bayesian methods are increasingly applied in these days in the theory and practice of statistics. Any Bayesian inference depends on a likelihood and a prior. Ideally one would like to elicit a prior from related sources of information or past data. However, in its absence, Bayesian methods need to rely on some "objective" or "default" priors, and the resulting posterior inference can still be quite valuable. Not surprisingly, over the years, the catalog of objective priors also has become prohibitively large, and one has to set some specific criteria for the selection of such priors. Our aim is to review some of these criteria, compare their performance, and illustrate them with some simple examples. While for very large sample sizes, it does not possibly matter what objective prior one uses, the selection of such a prior does influence inference for small or moderate samples. For regular models where asymptotic normality holds, Jeffreys' general rule prior, the positive square root of the determinant of the Fisher information matrix, enjoys many optimality properties in the absence of nuisance parameters. In the presence of nuisance parameters, however, there are many other priors which emerge as optimal depending on the criterion selected. One new feature in this article is that a prior different from Jeffreys' is shown to be optimal under the chi-square divergence criterion even in the absence of nuisance parameters. The latter is also invariant under one-to-one reparameterization.) <|cite_end|>Eqn.~(4.3)). The solution of this equation is:
\[
\pi(\theta) \propto \hat{I}_{n}^{\frac{1}{2}}(\theta).
\]
which is just the Jeffreys General Rule prior <|cite_start|> (Reference: An Invariant Form for the Prior Probability in Estimation Problems: It is shown that a certain differential form depending on the values of the parameters in a law of chance is invariant for all transformations of the parameters when the law is differentiable with regard to all parameters. For laws containing a location and a scale parameter a form with a somewhat restricted type of invariance is found even when the law is not everywhere differentiable with regard to the parameters. This form has the properties required to give a general rule for stating the prior probability in a large class of estimation problems.) <|cite_end|>. For a vector of parameters $\boldsymbol{\theta}$, the analogous Jeffreys prior (ignoring scaling and shifts) can be taken as:
\begin{equation}
\pi(\boldsymbol{\theta}) \propto \det(\mathbf{I}(\boldsymbol{\theta}))^{1/2}, \label{eqn:JeffDef}
\end{equation}
with the elements of the Fisher Information \emph{matrix} being defined as
\begin{equation}
I_{ij}(\boldsymbol{\theta}) \doteq E\left[ -\frac{\partial^{2}} {\partial \theta_i \partial \theta_j}\logL \right]\label{eqn:FIMdef}
\end{equation}
These are otherwise known as the elements of the inverse parameter covariance matrix $C_{ij}^{-1}$ <|cite_start|> (Reference: Numerical recipes in C (2nd ed.): the art of scientific computing: ) <|cite_end|>, and their use is
the standard approach for the representation and estimation of parameter uncertainty.
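As a purely illustrative numerical sketch of Eqns.~(\ref{eqn:FIMdef}) and (\ref{eqn:JeffDef}), the observed information at the optimum can be approximated by finite differences of the log-likelihood; here we use a Gaussian model with unknown mean and variance, and replace the expectation in (\ref{eqn:FIMdef}) by its empirical value at the MLE. None of the numerical choices below come from the references.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.5, size=200)        # illustrative data

def loglike(theta):
    mu, v = theta                                    # mean and variance
    return -0.5 * np.sum(np.log(2 * np.pi * v) + (x - mu) ** 2 / v)

theta_hat = np.array([x.mean(), x.var()])            # MLE of (mu, v)

def observed_information(f, theta, eps=1e-3):
    """Negative Hessian of the log-likelihood by central differences."""
    d = len(theta)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.eye(d)[i] * eps
            ej = np.eye(d)[j] * eps
            H[i, j] = (f(theta + ei + ej) - f(theta + ei - ej)
                       - f(theta - ei + ej) + f(theta - ei - ej)) / (4 * eps ** 2)
    return -H

I_hat = observed_information(loglike, theta_hat)      # ~ n * (information per point)
cov = np.linalg.inv(I_hat)                            # approximate parameter covariance
jeffreys_at_mle = np.sqrt(np.linalg.det(I_hat))       # Jeffreys density, up to scale, at the MLE
print(cov, jeffreys_at_mle)
\end{verbatim}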
\subsubsection{An Alternative Way to Understand Parameter Uncertainty}
Use of Eqn.~(\ref{eqn:FIMdef}) assumes we are working in a regime where $n$ can be taken to be
large, so that the uncertainty associated with the parameters converges to a multivariate
Gaussian (Normal) distribution. This is generally \emph{not} the case for ANNs.
Difficulties of tractability also arise when applying Eqn.~(\ref{eqn:JeffDef}) to evaluate
Eqn.~(\ref{eqn:postdef}), both with the practicalities of computing second derivatives and also
the use of an expectation value (which is generally ignored).
Although it has been suggested that second derivatives should be computable on an ANN via a simple extension to the back-propagation training algorithm <|cite_start|> (Reference: Neural Networks For Pattern Recognition: Highlights of adaptive resonance theory classifying spatial patterns classifying temporal patterns multilayer networks and the use of attention representing synonyms specific architectures that use presynaptic inhibition. Appendices: feedforward circuits for normalization and noise suppression network equations used in the simulations of chapter 3 network equations used in the simulations of chapter 4.) <|cite_end|>, the authors still know of no general method.
However, the na\"{\i}ve derivation of the simple Jeffreys prior above gives us the link between the Bayesian definition of Jeffreys invariant priors and the Frequentist literature on probability matching priors~(see <|cite_start|> (Reference: Objective priors: An introduction for frequentists: Bayesian methods are increasingly applied in these days in the theory and practice of statistics. Any Bayesian inference depends on a likelihood and a prior. Ideally one would like to elicit a prior from related sources of information or past data. However, in its absence, Bayesian methods need to rely on some "objective" or "default" priors, and the resulting posterior inference can still be quite valuable. Not surprisingly, over the years, the catalog of objective priors also has become prohibitively large, and one has to set some specific criteria for the selection of such priors. Our aim is to review some of these criteria, compare their performance, and illustrate them with some simple examples. While for very large sample sizes, it does not possibly matter what objective prior one uses, the selection of such a prior does influence inference for small or moderate samples. For regular models where asymptotic normality holds, Jeffreys' general rule prior, the positive square root of the determinant of the Fisher information matrix, enjoys many optimality properties in the absence of nuisance parameters. In the presence of nuisance parameters, however, there are many other priors which emerge as optimal depending on the criterion selected. One new feature in this article is that a prior different from Jeffreys' is shown to be optimal under the chi-square divergence criterion even in the absence of nuisance parameters. The latter is also invariant under one-to-one reparameterization.) <|cite_end|>and <|cite_start|> (Reference: Probability matching priors: higher order asymptotics: The remainder of the book turns to more statistical and applied issues but is relatively short in comparison to the foregoing material. Chapter 10 discusses estimation of the parameters of a multivariate t distribution including the use of the expectation maximization algorithm or one of its variants to obtain maximum likelihood estimates. Chapter 11 deals with classical and Bayesian inference in regression models when the errors are assumed to come from an uncorrelated multivariate t distribution. The implications of this model are that the errors are identically distributed but statistically dependent despite their lack of correlation. The applications in Chapter 12 are really notes for further reading. Projection pursuit, portfolio optimization, cluster analysis, and multiple decision problems are mentioned. In summary, this book is a useful reference work for anyone with an interest in non-Gaussian continuous multivariate distributions. Some form of heavytailed multivariate t distribution is often a natural alternative model to the normal distribution, and this book successfully summarizes what is known about such distributions. No doubt there are some omissions, as is to be expected when the relevant literature is very scattered, but the authors have done a useful service in bringing together a lot of useful material in one compact volume.) <|cite_end|>),
and indicates how we can progress.
It is important to note that the degrees of freedom we are manipulating here, by defining a prior over the parameter(s), are precisely our freedom to reparameterise our original family of model pdfs. A Likelihood or an integrated Likelihood is not a pdf or a probability. Under a redefinition of parameter $\theta \mapsto \omega(\theta)$, we have that:
\begin{equation}
\widetilde{\Like}_{n}(\omega(\theta)) \equiv \Like_{n}(\theta) \:\: \mbox{and:} \:\: \widetilde{\Like}_{n}(\omega(\theta))d\omega \equiv \Like_{n}(\theta)\left(\frac{d\omega}{d\theta}\right)d\theta
\equiv \widetilde{\Like}_{n}(\omega(\theta))\left(\frac{d\omega}{d\theta}\right)d\theta, \label{eqn:Ltildedef}
\end{equation}
where $\widetilde{\Like}_{n}$ is our new Likelihood function under reparameterisation. In particular, this means that the ordering of Likelihood values is preserved, hence $\hat{\omega}_{n} \equiv \omega(\MLEn)$ and the optimum Likelihood remains the optimum.
We hence see that the derivative $\frac{d\omega}{d\theta}$ of the reparameterisation function takes the place of the Bayesian prior $\pi(\theta)$, and the mapped Likelihood function $\widetilde{\Like}_{n}(\omega(\theta))$, generated from the original Likelihood function, replaces the posterior $\pi^{*}_{n}(\theta)$ <|cite_start|> (Reference: On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods: ) <|cite_end|> that would otherwise be generated from that same Likelihood. The requirement that a prior pdf is non-negative becomes the requirement that our reparameterisation function $\omega(\theta)$ is monotonically non-decreasing (that is, $\frac{d\omega}{d\theta}\ge 0$).
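A familiar special case may help fix ideas (an illustration, not a result from the references): for a scale parameter $v>0$ the monotonic choice $\omega(v)=\ln v$ gives
\[
\frac{d\omega}{dv} = \frac{1}{v},
\]
so the `prior' induced by this particular reparameterisation is exactly the scale-invariant $dv/v$ form that reappears in the variance example below.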
The use of a Jeffreys prior can be considered as only the start of an iterative process, for which the Gaussian mapped parameter is ultimately the result (see Appendix~\ref{app1})
\footnote{An iterative sequence of invariant priors, starting from the Jeffreys prior, was also investigated by Dowe. See <|cite_start|> (Reference: {MML: 본 논문에서는 웹을 통한 저작과 정보자원의 전달을 위해 완전하고, 플랫폼과 시스템에 독립적인 환경을 제공하는 XML 구조 문서를 보다 쉽게 편집하기 위한 XML 구조 문서 에디터를 설계하고 구현한다. 또한, 기 생성된 XML 문서를 스타일시트 편집기를 통해 스타일정보를 처리할 수 있는 기능도 구현한다. 이를 위해 XML 구조 문서 에디터의 기능적 요구사항 및 이를 수용하기 위한 에디터 구조를 제안하고 프로토타입을 구현하였다.) <|cite_end|>, \S7.1, page 953 for an example involving the multinomial distribution.}.
Rather than applying the Jeffreys prior/mapping itself, singly or iteratively,
we instead choose to map a suitable portion of the Likelihood function directly to a Gaussian.
We hence will require that:
\[
\frac{\Like_{n}(\theta)}{\Like_{n}(\MLEn)} = \frac{\widetilde{\Like}_{n}(\omega)}{\widetilde{\Like}_{n}(\hat{\omega})} = \exp \left(-\frac{1}{2}(\omega-\hat{\omega})^{2}\right) \:\: \Rightarrow \:\: \ln\left(\frac{\Like_{n}(\theta)}{\Like_{n}(\MLEn)}\right) = -\frac{1}{2}(\omega-\hat{\omega})^{2}.
\]
Therefore, if we centre the MLE such that $\hat{\omega} \equiv \omega(\MLEn) = 0$ then:
\begin{equation}
\omega(\theta) = \sign (\theta-\MLEn)\sqrt{-2\ln (\Like_{n}(\theta)/\Like_{n}(\MLEn))} \equiv z(\theta),\label{eqn:zdef}
\end{equation}
which is just the function $z(\cdot)$, defined as the signed square-root of the log-likelihood ratio statistic <|cite_start|> (Reference: Similar tests and the standardized log likelihood ratio statistic: SUMMARY When testing an affine hypothesis in an exponential family the 'ideal' procedure is to calculate the exact similar test, or an approximation to this, based on the conditional distribution given the minimal sufficient statistic under the null hypothesis. By contrast to this there is a 'primitive' approach in which the marginal distribution of a test statistic is considered and any nuisance parameter appearing in the test statistic is replaced by an estimate. We show here that when using standardized likelihood ratio statistics the 'primitive' procedure is in fact an 'ideal' procedure to order O(n-3/2). As an example we consider inference for the mean of a log normal distribution in detail.) <|cite_end|> <|cite_start|> (Reference: Simple modifications for signed roots of likelihood ratio statistics: SUMMARY In the context of inference about a scalar parameter in the presence of nuisance parameters, some simple modifications for the signed root of the log-likelihood ratio statistic R are developed that reduce the order of error in the standard normal approximation to the distribution of R from O(n- 1/2) to O(n- 1). Barndorff-Nielsen has introduced a variable U such that the error in the standard normal approximation to the distribution of R +R- R log(U/R) is of order O(n-3/2), but calculation of Urequires the specification of an exact or approximate ancillary statistic A. This paper proposes an alternative variable to U, denoted by T, that is available without knowledge of A and satisfies T= U+ Op(n 1) in general. Thus the standard normal approximation to the distribution of R +R'- log(T/R) has error of order O(n- 1), and it can be used to construct approximate confidence limits having coverage error of order O(n- 1). In certain cases, however, Tand Uare identical. The derivation of Tinvolves the Bayesian approach to constructing confidence limits considered by Welch and Peers, and Peers. Similar modifications for the signed root of the conditional likelihood ratio statistic are also developed, and these modifications are seen to be useful when a large number of nuisance parameters are present. Several examples are presented, including inference for natural parameters in exponential models and inference about location-scale models with type II censoring. In each case, the relationship between Tand U is discussed. Numerical examples are also given, including inference for regression models, inference about the means of log-normal distributions and inference for exponential lifetime models with type I censoring, where Barndorff-Nielsen's variable U is not available.) <|cite_end|>.
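The mapping in (\ref{eqn:zdef}) is straightforward to evaluate numerically once the log-likelihood and its maximum are available. The following sketch does so on a grid; the binomial log-likelihood used here is purely an illustrative choice.
\begin{verbatim}
import numpy as np

# Illustrative one-parameter log-likelihood: k successes in m Bernoulli trials.
k, m = 3, 10
def loglike(theta):
    return k * np.log(theta) + (m - k) * np.log(1.0 - theta)

theta_hat = k / m                                     # MLE
theta = np.linspace(1e-3, 1.0 - 1e-3, 999)
llr = loglike(theta) - loglike(theta_hat)             # log-likelihood ratio, <= 0
z = np.sign(theta - theta_hat) * np.sqrt(-2.0 * llr)

# Sanity check of the defining identity: the relative likelihood, viewed as a
# function of z, is exactly the unnormalised standard Gaussian exp(-z^2/2).
assert np.allclose(np.exp(llr), np.exp(-0.5 * z ** 2))
\end{verbatim}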
It has been shown previously that hypothesis tests constructed for
$z(\theta)$ are
rectangular <|cite_start|> (Reference: On Formulae for Confidence Points Based on Integrals of Weighted Likelihoods: ) <|cite_end|>and honest <|cite_start|> (Reference: Moving Towards Probability Forecasting: This paper proposes an international collaboration between researchers in academia and policymaking institutions to stimulate and coordinate research on probability forecasting in macroeconomics, developing a toolbox for short-term prediction. The toolbox should include time series models, methods for forecast combination, and techniques for probabilistic forecast evaluation in order to reduce the setup costs and risks to both individual researchers and policymaking organizations. A particular emphasis should be placed on replication studies with the toolbox so that central bankers can be sure that they are utilizing best practice techniques to produce probabilistic forecasts of events of interest. Full publication: Globalisation and Inflation Dynamics in Asia and the Pacific) <|cite_end|>, as required for
practical use.
The idea can also be seen to be consistent with the work of Cram\'{e}r and Rao regarding the minimum variance bound~(MVB), in the sense that the MVB is saturated when the Likelihood function is exactly Gaussian with a known mean, since efficient estimators for the variance exist in this case~(e.g., see Cram\'{e}r <|cite_start|> (Reference: Mathematical methods of statistics: In this text about statistical mathematical theory, Harald Cramer joins two major lines of development in the field: while British and American statisticians were developing the science of statistical inference, French and Russian probablists transformed the classical calculus of probability into a rigorous and purely mathematical theory. The result of Cramer's work is an exposition of the mathematical methods of modern statistics that set the standard that others have since sought to follow. The first part of the book is an introduction to the fundamental concept of a distribution and of integration with respect to a distribution. The second part contains the general theory of random variables and probability distributions while the third is devoted to the theory of sampling statistical estimation and tests of significance.) <|cite_end|>, Chapter 32).
This general approach was known about at least as far back as Anscombe\footnote{Who also said <|cite_start|> (Reference: Normal likelihood functions: ) <|cite_end|>, ``\emph{Typically it is the evidence from a small body of data (often corresponding to a non-normal Likelihood function) that is difficult to grasp precisely.}"} in 1964 <|cite_start|> (Reference: Normal likelihood functions: ) <|cite_end|> <|cite_start|> (Reference: Parametrizations of Non‐Linear Models: ) <|cite_end|>.
It can also be related to the more familiar case of the Fisher $z$-transformation{\footnote{For the specific case of sample correlation coefficients, the highly-skewed nature of the sampling distribution, even for large sample sizes, made the standard correlation coefficient unsuitable when it came to assessing the accuracy of observed correlations. Fisher showed that a simple transformation based on the hyperbolic tangent reduced these curves to close approximations to the normal distribution, with a variance that is stable over different values of the true correlation <|cite_start|> (Reference: FREQUENCY DISTRIBUTION OF THE VALUES OF THE CORRELATION COEFFIENTS IN SAMPLES FROM AN INDEFINITELY LARGE POPU;ATION: ) <|cite_end|> <|cite_start|> (Reference: 014: On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample.: ) <|cite_end|>.}}, which
Winterbottom <|cite_start|> (Reference: A note on the derivation of Fisher's transformation of the correlation coefficient: Abstract Fisher's transformation of the bivariate-normal correlation coefficient is usually derived as a variance-stabilizing transformation and its normalizing property is then demonstrated by the reduced skewness of the distribution resulting from the transformation. In this note the transformation is derived as a normalizing transformation that incorporates variance stabilization. Some additional remarks are made on the transformation and its uses.) <|cite_end|>shows can be derived by first requiring that it reduces skewness, and that, after bias correction, is \emph{both} normalising and variance-stabilising.
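For completeness, the transformation referred to in the footnote is
\[
z_{F} = \frac{1}{2}\ln\!\left(\frac{1+r}{1-r}\right) = \tanh^{-1}(r),
\]
which, for a sample correlation coefficient $r$ computed from $n_{s}$ pairs, is approximately normal with variance close to $1/(n_{s}-3)$, so that normalisation and variance stabilisation are achieved in a single step.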
In conclusion, the Jeffreys prior can be derived as an approximation to the
process of mapping the original Likelihood function onto a Gaussian.
Frequentists would claim that this is what the Jeffreys prior is doing,
whilst Bayesians might claim that Eqn.~(\ref{eqn:JeffDef}) defines the underlying
principle.
What we explain below is that it does not matter which of these
explanations you personally prefer; the consequences turn out to be the same.
\subsubsection{Theory Summary}
Jeffreys derived his priors in order to obtain consistency under
parameter transformation, which can
be considered a scientific requirement.
In a Frequentist sense, the Jeffreys prior can be derived as a density
scaling which approximates the mapping of a parameter to achieve a
Gaussian Likelihood function (see Eqns.~(\ref{eqn:FINdef}) to (\ref{eqn:JeffDef})).
It should be noted that such a mapping is also consistent under parameter transformation.
Generally, when performing Likelihood estimation of parameters, we rely to some extent
on the central limit theorem to ensure that, for sufficiently large quantities of data,
the Likelihood function around the optimum will be approximately multi-dimensional Gaussian.
Under these circumstances the derivative terms, and the correlations between them,
can be modelled using a covariance matrix determined from the Minimum Variance Bound (MVB)
in the usual way~(\ref{eqn:FIMdef}). However, for highly asymmetric Likelihood functions
proportionately more data will be needed.
For highly non-linear systems, and in circumstances
where the curse of dimensionality reduces the effective data quantity,
we cannot expect this justification to hold. We will show
below that this is the circumstance which we encounter for ANNs, but first
we start with two simpler problems in statistical estimation which
illustrate this.
Rather than using Jeffreys' approximation, if we directly map a parameter
with $z(\theta)$ (\ref{eqn:zdef}), {\bf the Jeffreys
prior density $\pi(z(\theta))$ is not only exact but uniform},
due to the origins of Eqn.~(\ref{eqn:JeffDef}). In a Bayesian sense, you may not accept
this origin for Jeffreys priors and would prefer to simply
accept Eqn.~(\ref{eqn:JeffDef}) as already exact. Either way, for this special
definition of a parameter the Likelihood
function can be directly interpreted as a parameter density (\ref{eqn:postdef}).
Under this scheme the Bayesian and Frequentist approaches are directly
comparable and we achieve a form of synthesis,
in all respects except the interpretation of $\pi(\theta)$
as a probability{\footnote{Bayesians have already noted that Jeffreys
priors are often not consistent with Kolmogorov's axioms and therefore ``improper''.}}.
This understanding of the relationship between the
approaches gives us new analysis options. We can exploit either the observation that the
uninformative prior for the original
parameter is the derivative of $z(\theta)$ (see Approach I below), or the observation that the prior for this
mapped parameter is uniform (see Approach II below).
Both insights allow us to avoid the need to evaluate Eqn.~(\ref{eqn:JeffDef}) while
still computing Eqn.~(\ref{eqn:postdef}).
\subsection{Parameter Uncertainty: Approach I, Binomial and Chi-square}
In what follows, we use the signed square-root of the log likelihood ratio defined above as our mapping function. We then estimate the distribution over a parameter for use in assessment of future computational uncertainty (systematic or `epistemic' errors). That is:
\begin{equation}
p(\theta|\mathbf{X}) \propto \Like(\theta,\mathbf{X})\,\left|\frac{dz}{d\theta}\right|,\label{eqn:zpriordef}
\end{equation}
where we now use the $p(\theta|\mathbf{X})$ notation rather than the posterior $\pi^{*}(\theta,\mathbf{X})$ to make it clear that we are using a \emph{specific} mapping of parameter space rather than an explicit Bayesian prior on parameter space, or a general mapped Likelihood function.
We emphasise at this point that, following the basic local argument above,
this expression is expected to be exact under either a
Frequentist or a Bayesian interpretation and applicable to arbitrary Likelihood functions, whilst use of Eqn.~(\ref{eqn:JeffDef}) is not.
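Eqn.~(\ref{eqn:zpriordef}) can be evaluated on a grid with no analytic work: compute $z(\theta)$ as above, differentiate it numerically, and renormalise. A minimal sketch, re-using the illustrative binomial log-likelihood from the earlier snippet, is:
\begin{verbatim}
import numpy as np

k, m = 3, 10
def loglike(theta):
    return k * np.log(theta) + (m - k) * np.log(1.0 - theta)

theta_hat = k / m
theta = np.linspace(1e-3, 1.0 - 1e-3, 2001)
llr = loglike(theta) - loglike(theta_hat)
z = np.sign(theta - theta_hat) * np.sqrt(-2.0 * llr)

dz_dtheta = np.gradient(z, theta)                    # dz/dtheta on the grid
density = np.exp(llr) * np.abs(dz_dtheta)            # L(theta) |dz/dtheta|, up to scale
density /= density.sum() * (theta[1] - theta[0])     # normalise p(theta | X)

# By construction this is a standard normal density pushed back through
# theta(z), so it integrates to (approximately) one on the grid.
print(density.sum() * (theta[1] - theta[0]))
\end{verbatim}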
\subsubsection{Example: The Estimation of a Variance from a Sample with Known Mean}
We consider a very small sample of $n$ i.i.d. Gaussian data points $\{X_{i}:i=1,\ldots n\}$, where the mean $\mu$ is known and the model is parameterised by the variance thus:
\[
p(x|v) = \frac{1}{\sqrt{2\pi v}}\exp\left(-\frac{1}{2v}(x-\mu)^{2}\right).
\]
The variance parameter has been chosen as an example since it gives a highly skewed Likelihood function, and also to illustrate the non-unique nature of Jeffreys various priors <|cite_start|> (Reference: The selection of prior distributions by formal rules: Abstract Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated b...) <|cite_end|>. The Likelihood function is:
\[
\Like_{n}(v) = \frac{1}{(2\pi v)^{\frac{n}{2}}}\exp\left(-\frac{1}{2v}\sum\limits_{i=1}^{n}(X_{i}-\mu)^{2}\right).
\]
We are asked to determine the uncertainty associated with an MLE of the variance $\hat{v}$, where\footnote{Note that this is an unbiased estimate since we are using the known mean, rather than the sample mean.}:
\[
\hat{v} \doteq \frac{1}{n} \sum\limits_{i=1}^{n} (X_i - \mu)^2,
\]
and hence the corresponding log-likelihood function is:
\[
- \logL_{n}(v) = - l_{n}(v) = \frac{n}{2}\left[ \frac{\hat{v}}{v} + \ln(2 \pi v)\right].
\]
What we can observe for this system is that once $\hat{v}$ is specified, the Likelihood can be written in a scale-invariant form:
\[
\Like_{n}(v,\hat{v}) \propto \frac{1}{(2\pi)^{\frac{n}{2}}} \cdot \left(\frac{\hat{v}}{v}\right)^{\frac{n}{2}}
\exp \left[ -\frac{n}{2}\left(\frac{\hat{v}}{v}\right)\right] \propto \left(\frac{\hat{v}}{v}\right)^{\frac{n}{2}}\exp \left[ -\frac{n}{2}\left(\frac{\hat{v}}{v}\right)\right].
\]
We can therefore, without loss of generality, restrict ourselves to
consideration of the uncertainty associated with $\hat{v}=1$. We can now compare the theoretical predictions from Bayesian and Frequentist approaches. For this particular example there are two Jeffreys priors in the literature <|cite_start|> (Reference: The selection of prior distributions by formal rules: Abstract Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated b...) <|cite_end|>~(see Table \ref{tab:dists}). The first is the non-location or scale invariant prior (see Jeffreys <|cite_start|> (Reference: The Theory of Probability: In the 19th century, both the ideas and methods of the theory of probability, developed since the second half of the 17th century, received new incentives for further progress. These stimuli, highly differing one from another in their essence, were connected with the development of the natural sciences, with practical requirements of society and, also, with the formulation of purely mathematical problems.) <|cite_end|>, \S3.1), which gives $\frac{d\sigma}{\sigma} \propto \frac{d(\sigma^{2})}{\sigma^{2}} = \frac{dv}{v}$, whereas the second is the Jeffreys General Rule prior <|cite_start|> (Reference: An Invariant Form for the Prior Probability in Estimation Problems: It is shown that a certain differential form depending on the values of the parameters in a law of chance is invariant for all transformations of the parameters when the law is differentiable with regard to all parameters. For laws containing a location and a scale parameter a form with a somewhat restricted type of invariance is found even when the law is not everywhere differentiable with regard to the parameters. This form has the properties required to give a general rule for stating the prior probability in a large class of estimation problems.) <|cite_end|>\footnote{To be precise, this is the General Rule prior when you take the model to be that with two parameters, $\Like(\sigma,\mu,\mathbf{X})$, where you compute the determinant of the Fisher Information matrix to obtain the prior. The fact that the General Rule itself gives a \emph{different} answer if you fix one parameter and compute just the Fisher Information function is the specific example considered by Jeffreys in the 1961 edition of <|cite_start|> (Reference: The Theory of Probability: In the 19th century, both the ideas and methods of the theory of probability, developed since the second half of the 17th century, received new incentives for further progress. 
These stimuli, highly differing one from another in their essence, were connected with the development of the natural sciences, with practical requirements of society and, also, with the formulation of purely mathematical problems.) <|cite_end|>.}.
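To preview the comparison numerically, the following sketch evaluates, for $\hat{v}=1$ and a deliberately small $n$, the directly mapped density of Eqn.~(\ref{eqn:zpriordef}) alongside the posterior obtained from the first (scale-invariant, $dv/v$) prior quoted above. All quantities are normalised on a finite grid purely for illustration, and the numbers are not intended to reproduce any tabulated results.
\begin{verbatim}
import numpy as np

n, v_hat = 3, 1.0                                    # deliberately small sample
v = np.linspace(0.05, 15.0, 4000)

loglike = -0.5 * n * (v_hat / v + np.log(2 * np.pi * v))
rel_like = np.exp(loglike - loglike.max())           # peaks at v = v_hat

# Approach I: density from the direct Gaussian mapping, p(v|X) ~ L(v) |dz/dv|.
z = np.sign(v - v_hat) * np.sqrt(-2.0 * (loglike - loglike.max()))
p_map = rel_like * np.abs(np.gradient(z, v))

# Posterior from the scale-invariant prior pi(v) ~ 1/v.
p_scale = rel_like / v

dv = v[1] - v[0]
p_map /= p_map.sum() * dv
p_scale /= p_scale.sum() * dv

# The extent to which the two normalised curves differ at such a small n is
# what the comparison in the text is designed to expose.
print(v[np.argmax(p_map)], v[np.argmax(p_scale)])
\end{verbatim}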
"<|reference_start|> The Concrete Distribution: A Continuous Relaxation of Discrete Random Variables: The reparameterization trick enables optimizing large scale stochastic computation graphs via gradient descent. The essence of the trick is to refactor each stochastic node into a differentiable function of its parameters and a random variable with fixed distribution. After refactoring, the gradients of the loss propagated by the chain rule through the graph are low variance unbiased estimators of the gradients of the expected loss. While many continuous random variables have such reparameterizations, discrete random variables lack useful reparameterizations due to the discontinuous nature of discrete states. In this work we introduce Concrete random variables---continuous relaxations of discrete random variables. The Concrete distribution is a new family of distributions with closed form densities and a simple reparameterization. Whenever a discrete stochastic node of a computation graph can be refactored into a one-hot bit representation that is treated continuously, Concrete stochastic nodes can be used with automatic differentiation to produce low-variance biased gradients of objectives (including objectives that depend on the log-probability of latent stochastic nodes) on the corresponding discrete graph. We demonstrate the effectiveness of Concrete relaxations on density estimation and structured prediction tasks using neural networks. <|reference_end|>",
"<|reference_start|> Network information criterion-determining the number of hidden units for an artificial neural network model: The problem of model selection, or determination of the number of hidden units, can be approached statistically, by generalizing Akaike's information criterion (AIC) to be applicable to unfaithful (i.e., unrealizable) models with general loss criteria including regularization terms. The relation between the training error and the generalization error is studied in terms of the number of the training examples and the complexity of a network which reduces to the number of parameters in the ordinary statistical theory of AIC. This relation leads to a new network information criterion which is useful for selecting the optimal network model based on a given training set. <|reference_end|>",
"<|reference_start|> Normal likelihood functions: <|reference_end|>",
"<|reference_start|> The selection of prior distributions by formal rules: Abstract Subjectivism has become the dominant philosophical foundation for Bayesian inference. Yet in practice, most Bayesian analyses are performed with so-called “noninformative” priors, that is, priors constructed by some formal rule. We review the plethora of techniques for constructing such priors and discuss some of the practical and philosophical issues that arise when they are used. We give special emphasis to Jeffreys's rules and discuss the evolution of his viewpoint about the interpretation of priors, away from unique representation of ignorance toward the notion that they should be chosen by convention. We conclude that the problems raised by the research on priors chosen by formal rules are serious and may not be dismissed lightly: When sample sizes are small (relative to the number of parameters being estimated), it is dangerous to put faith in any “default” solution; but when asymptotics take over, Jeffreys's rules and their variants remain reasonable choices. We also provide an annotated b... <|reference_end|>"
] | [
16,
20,
43,
50
] | {"<|cite_1|>": "ss-1674413", "<|cite_4|>": "ss-1803542", "<|cite_5|>": "ss-694754", "<|cite_6|>": "ss-1346499", "<|cite_7|>": "ss-1219805", "<|multi_cite_8_1|>": "ss-1279275", "<|multi_cite_8_2|>": "ss-1803543", "<|cite_10|>": "ss-1054407", "<|multi_cite_12_1|>": "ss-1054407", "<|cite_13|>": "ss-771393", "<|cite_14|>": "ss-797835", "<|multi_cite_15_1|>": "ss-933279", "<|multi_cite_15_2|>": "arxiv-78051", "<|multi_cite_15_3|>": "ss-1166400", "<|cite_16|>": "ss-1079706", "<|cite_17|>": "ss-1254667", "<|cite_18|>": "arxiv-109215", "<|cite_19|>": "ss-1803544", "<|cite_20|>": "ss-1554889", "<|cite_21|>": "ss-1184147", "<|multi_cite_22_1|>": "ss-1803544", "<|multi_cite_22_2|>": "ss-1677229", "<|cite_23|>": "ss-1025972", "<|multi_cite_24_1|>": "ss-1803545", "<|multi_cite_25_1|>": "ss-1803546", "<|multi_cite_25_2|>": "ss-1535482", "<|cite_26|>": "ss-1803547", "<|cite_27|>": "ss-1803548", "<|cite_28|>": "ss-1803545", "<|cite_29|>": "ss-1803545", "<|cite_30|>": "ss-1803546", "<|cite_31|>": "ss-1025972", "<|cite_32|>": "ss-1803549", "<|cite_33|>": "ss-1020953", "<|cite_34|>": "ss-1803546", "<|cite_35|>": "ss-1803550", "<|cite_36|>": "ss-1803545", "<|cite_37|>": "ss-1803551", "<|multi_cite_38_1|>": "ss-1803552", "<|multi_cite_38_2|>": "ss-1803553", "<|cite_39|>": "ss-1803545", "<|cite_40|>": "ss-1554889", "<|cite_41|>": "ss-2296422", "<|cite_42|>": "ss-1803554", "<|multi_cite_43_1|>": "ss-1803554", "<|multi_cite_43_2|>": "ss-1803555", "<|multi_cite_44_1|>": "ss-771474", "<|multi_cite_44_2|>": "ss-771475", "<|cite_45|>": "ss-1803556", "<|cite_46|>": "ss-1535482", "<|cite_47|>": "ss-1535482", "<|cite_48|>": "ss-1184147", "<|cite_49|>": "ss-1025972", "<|cite_50|>": "ss-1184147", "<|cite_51|>": "ss-1803553"} |
1906.08099 | <|paper_start|> Title: Neuromorphic Liquid Marbles With Aqueous Carbon Nanotube Cores
Abstract: Neuromorphic Liquid Marbles With Aqueous Carbon Nanotube Cores: Neuromorphic computing devices attempt to emulate features of biological nervous systems through mimicking the properties of synapses, towards implementing the emergent properties of their counterparts, such as learning. Inspired by recent advances in the utilisation of liquid marbles (microlitre quantities of fluid coated in hydrophobic powder) for the creation of unconventional computing devices, we describe the development of liquid marbles with neuromorphic properties through the use of copper coatings and 1.0 mg/ml carbon nanotube-containing fluid cores. Experimentation was performed by sandwiching the marbles between two cup-style electrodes and stimulating them with repeated DC pulses at 3.0 V. Our results demonstrate that `entrainment' of carbon nanotube-filled copper liquid marbles via periodic pulses can cause their electrical resistance to switch rapidly from high- to low-resistance profiles upon inverting the polarity of stimulation: the reduction in resistance between high and low profiles was approximately 88\% after two rounds of entrainment. This effect was found to be reversible through reversion to the original stimulus polarity and was strengthened by repeated experimentation, as evidenced by a mean reduction in time to switching onset of 43\%. These effects were not replicated in nanotube solutions not bound inside liquid marbles. Our electrical characterisation also reveals that nanotube-filled liquid marbles exhibit pinched-loop hysteresis IV profiles consistent with the description of memristors. We conclude by discussing the applications of this technology to the development of unconventional computing devices and the study of emergent characteristics in biological neural tissue.
Introduction
Computation is a ubiquitous property of natural matter that, through a universal and objective language, will unite the sciences. More generally, physical systems may be applied to mathematical problems to create machines and computers. Complex systems may be correspondingly abstracted in algorithmic terms in order to describe phenomena that have traditionally evaded the grasp of understanding, such as complexity arising from biological sensorial-actuation networks, through which phenomena such as `intelligence' are hypothesised to emerge <|cite_start|> (Reference: Neural cytoskeleton capabilities for learning and memory: ) <|cite_end|> <|cite_start|> (Reference: On the role of the plasmodial cytoskeleton in facilitating intelligent behaviour in slime mould Physarum polycephalum: The plasmodium of slime mould Physarum polycephalum behaves as an amorphous reaction-diffusion computing substrate and is capable of apparently intelligent behaviour. But how does intelligence emerge in an acellular organism? Through a range of laboratory experiments, we visualise the plasmodial cytoskeleton, a ubiquitous cellular protein scaffold whose functions are manifold and essential to life, and discuss its putative role as a network for transducing, transmitting and structuring data streams within the plasmodium. Through a range of computer modelling techniques, we demonstrate how emergent behaviour, and hence computational intelligence, may occur in cytoskeletal communications networks. Specifically, we model the topology of both the actin and tubulin cytoskeletal networks and discuss how computation may occur therein. Furthermore, we present bespoke cellular automata and particle swarm models for the computational process within the cytoskeleton and observe the incidence of emergent patterns in both. Our work grants unique insight into the origins of natural intelligence; the results presented here are therefore readily transferable to the fields of natural computation, cell biology and biomedical science. We conclude by discussing how our results may alter our biological, computational and philosophical understanding of intelligence and consciousness.) <|cite_end|> <|cite_start|> (Reference: Actin automata: Phenomenology and localizations: Actin is a globular protein which forms long filaments in the eukaryotic cytoskeleton, whose roles in cell function include structural support, contractile activity to intracellular signalling. We model actin filaments as two chains of one-dimensional binary-state semi-totalistic automaton arrays to describe hypothetical signalling events therein. Each node of the actin automaton takes state `0' (resting) or `1' (excited) and updates its state in discrete time depending on its neighbour's states. We analyse the complete rule space of actin automata using integral characteristics of space-time configurations generated by these rules and compute state transition rules that support travelling and mobile localizations. Approaches towards selection of the localisation supporting rules using the global characteristics are outlined. We find that some properties of actin automata rules may be predicted using Shannon entropy, activity and incoherence of excitation between the polymer chains. We also show that it is possible to infer whether a given rule supports travelling or stationary localizations by looking at ratios of excited neighbours are essential for generations of the localizations. 
We conclude by applying biomolecular hypotheses to this model and discuss the significance of our findings in context with cell signalling and emergent behaviour in cellular computation.) <|cite_end|> <|cite_start|> (Reference: The Physarum polycephalum actin network: formalisation, topology and morphological correlates with computational ability: The plasmodial form of slime mould Physarum polycephalum is a macroscopic acellular organism that is capable of apparently intelligent behaviour, yet it lacks any features usually associated with intelligence. In this investigation, we study the morphology of the plasmodial actin cytoskeleton and formalise its network topology in efforts to correlate cytoskeletal morphology with slime mould computational abilities. The plasmodial actin network is a highly abundant, complex structure which links the functional components of the cell, whose topology may be approximated with a range of proximity graphs, depending on the physiological and environmental conditions within the plasmodium. Its topology is highly dynamical and is likely to rapidly alter in response to environmental stimuli to maximise network efficiency. We conclude by discussing the nature of the computational process in organic networks.) <|cite_end|> <|cite_start|> (Reference: The Cytoskeleton as a Nanoscale Information Processor: Electrical Properties and an Actin-Microtubule Network Model: ) <|cite_end|> <|cite_start|> (Reference: Ultimate Computing Biomolecular Consciouness And Nano Technology: ) <|cite_end|> <|cite_start|> (Reference: Cytoskeletal logic: a model for molecular computation via Boolean operations in microtubules and microtubule-associated proteins.: ) <|cite_end|>, even in organisms that do not possess nervous systems <|cite_start|> (Reference: Towards a Physarum learning chip: ) <|cite_end|>. This application of computing concepts and development of experimental devices therein encompasses the field of `unconventional computing'.
A neuromorphic characteristic of an engineered system is so named if it mimics the structure or functionality of a component\slash multiple components of the Metazoan nervous system. Typically, this will involve attempts to replicate the phenomenon of synaptic plasticity: self-modulation of the excitability of neuron-neuron junctions (synapses), towards replicating state retention (`learning') via a process of entrainment with graduated input (`neuromodulation'). Neuromorphic devices are worthy of research attention as an unconventional computing paradigm, due to certain features of their biological counterparts --- such as massive parallelism, emergence, and low energy consumption --- being highly desirable to emulate.
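To make the entrainment idea concrete before describing the marbles themselves, consider the toy model below. It is only an illustrative sketch, not a model fitted to any device reported here: a single `synaptic' conductance is potentiated a little by every stimulus pulse and relaxes partway back towards rest between pulses, so repeated stimulation leaves behind a retained change of state. All parameter values are assumptions chosen for readability.
\begin{verbatim}
# Toy model of entrainment: a "synaptic" conductance that is
# potentiated by each stimulus pulse and partially relaxes back
# towards rest, so repeated pulses leave a retained state change.
# All constants are illustrative assumptions, not measured values.

def entrain(pulses, g_rest=1.0, gain=0.05, decay=0.01, steps_between=10):
    g = g_rest
    history = []
    for _ in range(pulses):
        g += gain * g                      # potentiation caused by one pulse
        for _ in range(steps_between):
            g += decay * (g_rest - g)      # partial relaxation towards rest
        history.append(g)
    return history

if __name__ == "__main__":
    trace = entrain(pulses=20)
    print("conductance after each pulse:", [round(g, 3) for g in trace])
\end{verbatim}
Because each pulse adds more than the inter-pulse relaxation removes, the conductance settles above its resting value, a crude analogue of the state retention (`learning') discussed above.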
This paper aims to create Neuromorphic computing devices from Liquid marbles (LMs). LMs are spherical microlitre quantities of fluid with a superhydrophobic particulate coating, which can range in size between tens and thousands of micrometres in diameter <|cite_start|> (Reference: Magnetically Driven Manipulation of Nonmagnetic Liquid Marbles: Billiards with Liquid Marbles: Liquid marbles are droplets encapsulated by a layer of hydrophobic nanoparticles and have been extensively employed in digital microfluidics and lab-on-a-chip systems in recent years. In this study, magnetic liquid marbles were used to manipulate nonmagnetic liquid marbles. To achieve this purpose, a ferrofluid liquid marble (FLM) was employed and attracted toward an electromagnet, resulting in an impulse to a water liquid marble (WLM) on its way to the electromagnet. It was observed that the manipulation of the WLM by the FLM was similar to the collision of billiard balls except that the liquid marbles exhibited an inelastic collision. Taking the FLM as the projectile ball and the WLM as the other target balls, one can adjust the displacement and direction of the WLM precisely, similar to an expert billiard player. Firstly, the WLM displacement can be adjusted by altering the liquid marble volumes, the initial distances from the electromagnet, and the coil current. Secondly, the WLM direction can be adjusted by changing the position of the WLM relative to the connecting line between the FLM center and the electromagnet. Results show that when the FLM or WLM volume increases by five times, the WLM shooting distance approximately increases by 200% and decreases by 75%, respectively.) <|cite_end|> <|cite_start|> (Reference: Liquid marbles: principles and applications: The ability of particles to adhere to a fluid–fluid interface can stabilize the formation of an emulsion. When the encapsulated fluid is a liquid and the fluid in which it is immersed is air, the object formed is called a “Liquid Marble”. Here we discuss how liquid marbles can be created, their fundamental properties and their transport and potential uses. We show how they arise naturally as an insect waste disposal system, from impact of droplets on powders and on hydrophobic soil, and in the mixing of particulate containing liquids. Our principal aim is to review research on macroscopic single marbles and their potential uses in sensors and droplet microfluidics. However, we also illustrate the similarity between liquid marbles, Pickering emulsions and “Dry Water”, and the potential application of assemblies of liquid marbles within cosmetics and pharmaceutical formulations. Finally, we discuss how modifying the surface structure of particles and providing heterogeneous surface chemistry on particles (e.g. Janus particles) might provide new types of liquid marbles and applications.) <|cite_end|> <|cite_start|> (Reference: Liquid marbles: topical context within soft matter and recent progress: The study of particle stabilized interfaces has a long history in terms of emulsions, foams and related dry powders. The same underlying interfacial energy principles also allow hydrophobic particles to encapsulate individual droplets into a stable form as individual macroscopic objects, which have recently been called "Liquid Marbles". Here we discuss conceptual similarities to superhydrophobic surfaces, capillary origami, slippery liquids-infused porous surfaces (SLIPS) and Leidenfrost droplets. 
We provide a review of recent progress on liquid marbles, since our earlier Emerging Area article (Soft Matter, 2011, 7, 5473-5481), and speculate on possible future directions from new liquid-infused liquid marbles to microarray applications. We highlight a range of properties of liquid marbles and describe applications including detecting changes in physical properties (e.g. pH, UV, NIR, temperature), use for gas sensing, synthesis of compounds/composites, blood typing and cell culture.) <|cite_end|> <|cite_start|> (Reference: Liquid marbles, elastic nonstick droplets: From minireactors to self-propulsion: Liquid marbles are nonstick droplets wrapped by micro- or nanometrically scaled colloidal particles, representing a platform for a variety of chemical, biological, and microfluidics applications. Liquid marbles demonstrate elastic properties and do not coalesce when bounced or pressed. The effective surface tension and Young modulus of liquid marbles are discussed. Physical sources of the elasticity of liquid marbles are considered. Liquids and powders used for the fabrication of liquid marbles are surveyed. This feature article reviews properties and applications of liquid marbles. Liquid marbles demonstrate potential as microreactors, microcontainers for growing micro-organisms and cells, and microfluidics devices. The Marangoni-flow-driven self-propulsion of marbles supported by liquids is addressed.) <|cite_end|>. These systems exhibit novel characteristics such as low coefficients of friction <|cite_start|> (Reference: Properties of liquid marbles: Liquid marbles are liquid drops made non-wetting by the use of a powder which coats them. Because of the absence of a contact line, quick motions without leakage of small amounts of liquid are allowed, which can be of interest in microfluidic applications. After characterizing the static liquid marble, we focus on its properties and study experimentally the viscous motion of liquid marbles. Then, we describe qualitatively possible ways for putting marbles into motion and quantify the robustness of this object.) <|cite_end|> <|cite_start|> (Reference: On the mechanism of floating and sliding of liquid marbles: The mechanisms of floating and sliding of liquid marbles are studied. Liquid marbles containing CaCl(2) and marbles containing NaOH water solutions float on water containing Na(2)CO(3) and an alcoholic solution of phenolphthalein with no chemical reaction. Sliding of liquid marbles, consisting of NaOH water solutions, on polymer substrates coated with phenolphthalein is studied as well. No chemical reaction is observed. These observations supply direct experimental evidence for the suggestion that interfaces are separated by an air layer when marbles roll on solid substrates. It is concluded that a liquid marble rests on hydrophobic particles coating the liquid. In contrast, drops containing an NaOH water solution sliding on superhydrophobic surfaces coated with phenolphthalein leave a colored trace. The mechanism of low-friction sliding of drops deposited on superhydrophobic surfaces and liquid marbles turns out to be quite different: there is no direct contact between liquid and solid in the case of marbles' motion.) <|cite_end|> <|cite_start|> (Reference: On the nature of the friction between nonstick droplets and solid substrates: The lateral rolling of nonstick water and glycerol droplets deposited on solid substrates inclined at the sliding angles was studied. 
Droplets deposited on lotuslike water-repellent surfaces and liquid marbles (powder-coated droplets) deposited on solid substrates demonstrated similar behavior. The motion of both droplets and marbles featured friction that is far from Amonton type. The sliding angles of nonstick drops are sensitive to the "history" of a drop on the substrate. The friction of glycerol marbles is governed by viscous dissipation, whereas the rolling of water marbles is dictated by adhesion forces acting within the contact area.) <|cite_end|>, which have been exploited by nature <|cite_start|> (Reference: How aphids lose their marbles: Insects provide examples of many cunning stratagems to cope with the challenges of living in a world dominated by surface forces. Despite being the current masters of the land environment, they are at constant risk of being entrapped in liquids, which they prevent by having waxy and hairy surfaces. The problem is particularly acute in an enclosed space, such as a plant gall. Using secreted wax to efficiently parcel and transport their own excrement, aphids were able to solve this problem 200 Myr ago. Here, we report on the physical and physiological significance of this ingenious solution. The secreted powdery wax has three distinct roles: (i) it is hydrophobic, (ii) it creates a microscopically rough inner gall surface made of weakly compacted wax needles making the gall ultra–hydrophobic, and (iii) it coats the honeydew droplets converting them into liquid marbles, that can be rapidly and efficiently moved.) <|cite_end|> <|cite_start|> (Reference: Liquid Marbles in Nature: Craft of Aphids for Survival: Some aphids that live in the leaf galls of the host plant are known to fabricate liquid marbles consisting of honeydew and wax particles as an inner liquid and a stabilizer, respectively. In this study, the liquid marbles fabricated by the galling aphids, Eriosoma moriokense, were extensively characterized with respect to size and size distribution, shape, nanomorphology, liquid/solid weight ratio, and chemical compositions. The stereo microscopy studies confirmed that the liquid marbles have a near-spherical morphology and that the number-average diameter was 368 ± 152 μm, which is 1 order of magnitude smaller than the capillary length of the honeydew. The field emission scanning electron microscopy studies indicated that micrometer-sized wax particles with fiber- and dumpling-like shapes coated the honeydew droplets, which rendered the liquid marbles hydrophobic and nonwetting. Furthermore, the highly magnified scanning electron microscopy images confirmed that the wax particles were formed with assemblage of submicrometer-sized daughter fibers. The contact angle measurements indicated that the wax was intrinsically hydrophobic and that the liquid marbles were stabilized by the wax particles in the Cassie-Baxter model. The weight ratio of the honeydew and the wax particles was determined to be 96/4, and the honeydew consisted of 19 wt % nonvolatile components and 81 wt % water. The 1H nuclear magnetic resonance, Fourier transform infrared spectroscopy, and mass spectroscopy studies confirmed that the wax mainly consisted of triglycerides and that the honeydew mainly consisted of saccharides (glucose and fructose) and ribitol. The atomic force microscopy studies confirmed that honeydew is sticky in nature.) <|cite_end|>. 
It has been demonstrated that liquid marbles have myriad uses ranging from micro-bioreactors <|cite_start|> (Reference: Liquid marbles: Properties and applications: ) <|cite_end|> <|cite_start|> (Reference: Liquid marbles, elastic nonstick droplets: From minireactors to self-propulsion: Liquid marbles are nonstick droplets wrapped by micro- or nanometrically scaled colloidal particles, representing a platform for a variety of chemical, biological, and microfluidics applications. Liquid marbles demonstrate elastic properties and do not coalesce when bounced or pressed. The effective surface tension and Young modulus of liquid marbles are discussed. Physical sources of the elasticity of liquid marbles are considered. Liquids and powders used for the fabrication of liquid marbles are surveyed. This feature article reviews properties and applications of liquid marbles. Liquid marbles demonstrate potential as microreactors, microcontainers for growing micro-organisms and cells, and microfluidics devices. The Marangoni-flow-driven self-propulsion of marbles supported by liquids is addressed.) <|cite_end|> <|cite_start|> (Reference: Manipulation of liquid marbles: ) <|cite_end|> <|cite_start|> (Reference: The potential of liquid marbles for biomedical applications: A critical review: Liquid marbles (LM) are freestanding droplets covered by micro/nanoparticles with hydrophobic/hydrophilic properties, which can be manipulated as a soft solid. The phenomenon that generates these soft structures is regarded as a different method to generate a superhydrophobic behavior in the liquid/solid interface without modifying the surface. Several applications for the LM have been reported in very different fields, however the developments for biomedical applications are very recent. At first, the LM properties are reviewed, namely shell structure, LM shape, evaporation, floatability and robustness. The different strategies for LM manipulation are also described, which make use of magnetic, electrostatic and gravitational forces, ultraviolet and infrared radiation, and approaches that induce LM self‐propulsion. Then, very distinctive applications for LM in the biomedical field are presented, namely for diagnostic assays, cell culture, drug screening and cryopreservation of mammalian cells. Finally, a critical outlook about the unexplored potential of LM for biomedical applications is presented, suggesting possible advances on this emergent scientific area.) <|cite_end|> to gas biosensors <|cite_start|> (Reference: Liquid marble for gas sensing: The porous and superhydrophobic shell of a liquid marble prevents contact of its liquid core with outside surfaces, but allows gas transport. Liquid marble can therefore be used to sense gas or emit gas. Liquid marbles loaded with different indicators can simultaneously sense different gases via different mechanisms.) <|cite_end|> <|cite_start|> (Reference: Porous liquid marble shell offers possibilities for gas detection and gas reactions: ) <|cite_end|> to unconventional computing media <|cite_start|> (Reference: Liquid Marble Interaction Gate for Collision-Based Computing: Liquid marbles are microlitre droplets of liquid, encapsulated by self-organised hydrophobic particles at the liquid/air interface. They offer an efficient approach for manipulating liquid droplets and compartmentalising reactions in droplets. 
Digital fluidic devices employing liquid marbles might benefit from having embedded computing circuits without electronics and moving mechanical parts (apart from the marbles). We present an experimental implementation of a collision gate with liquid marbles. Mechanics of the gate follows principles of Margolus' soft-sphere collision gate. Boolean values of the inputs are given by the absence (FALSE) or presence (TRUE) of a liquid marble. There are three outputs: two outputs are trajectories of undisturbed marbles (they only report TRUE when just one marble is present at one of the inputs), one output is represented by trajectories of colliding marbles (when two marbles collide they lose their horizontal momentum and fall), this output reports TRUE only when two marbles are present at inputs. Thus the gate implements AND and AND-NOT logical functions. We speculate that by merging trajectories representing AND-NOT output into a single channel one can produce a one-bit half-adder. Potential design of a one-bit full-adder is discussed, and the synthesis of both a pure nickel metal and hybrid nickel/polymer liquid marble is reported.) <|cite_end|> <|cite_start|> (Reference: Liquid Marble Actuator for Microfluidic Logic Systems: ) <|cite_end|>. Our laboratory has developed LM devices that are capable of implementing computation through a variety of non-standard logics <|cite_start|> (Reference: Liquid Marble Interaction Gate for Collision-Based Computing: Liquid marbles are microlitre droplets of liquid, encapsulated by self-organised hydrophobic particles at the liquid/air interface. They offer an efficient approach for manipulating liquid droplets and compartmentalising reactions in droplets. Digital fluidic devices employing liquid marbles might benefit from having embedded computing circuits without electronics and moving mechanical parts (apart from the marbles). We present an experimental implementation of a collision gate with liquid marbles. Mechanics of the gate follows principles of Margolus' soft-sphere collision gate. Boolean values of the inputs are given by the absence (FALSE) or presence (TRUE) of a liquid marble. There are three outputs: two outputs are trajectories of undisturbed marbles (they only report TRUE when just one marble is present at one of the inputs), one output is represented by trajectories of colliding marbles (when two marbles collide they lose their horizontal momentum and fall), this output reports TRUE only when two marbles are present at inputs. Thus the gate implements AND and AND-NOT logical functions. We speculate that by merging trajectories representing AND-NOT output into a single channel one can produce a one-bit half-adder. Potential design of a one-bit full-adder is discussed, and the synthesis of both a pure nickel metal and hybrid nickel/polymer liquid marble is reported.) <|cite_end|> <|cite_start|> (Reference: Liquid Marble Actuator for Microfluidic Logic Systems: ) <|cite_end|> <|cite_start|> (Reference: Mechanical Sequential Counting with Liquid Marbles: ) <|cite_end|> where the LMs are considered as data or otherwise, to contain data (i.e.\ chemical reactants), which may interact with other LMs via collisions that will result in data translation or transfer via ricochets or coalescence. 
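The logic of the collision gate cited above can be written out explicitly. The sketch below reproduces only its truth-table behaviour (presence of a marble encodes TRUE, absence FALSE); the function and output names are ours, chosen for illustration.
\begin{verbatim}
# Logical behaviour of the marble interaction gate described above.
# A marble's presence encodes TRUE and its absence FALSE; when both
# marbles are present they collide, lose horizontal momentum and leave
# by the central (AND) output, otherwise an undisturbed marble leaves
# by its own (AND-NOT) output. Names are ours, for illustration only.

def interaction_gate(a: bool, b: bool):
    collided = a and b                     # both marbles present -> collision
    return {
        "out_a_not_b": a and not b,        # undisturbed marble from input A
        "out_b_not_a": b and not a,        # undisturbed marble from input B
        "out_a_and_b": collided,           # colliding marbles falling together
    }

if __name__ == "__main__":
    for a in (False, True):
        for b in (False, True):
            print(a, b, interaction_gate(a, b))
\end{verbatim}
As the cited work speculates, merging the two undisturbed-marble outputs into a single channel would provide the outputs needed for a one-bit half-adder.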
Towards these goals, we (and others) have also examined LM dynamics to enhance their usefulness for these purposes, e.g.\ evaporation <|cite_start|> (Reference: Evaporation, Lifetime, and Robustness Studies of Liquid Marbles for Collision-Based Computing: Liquid marbles (LMs) have recently attracted interest for use as cargo carriers in digital microfluidics and have successfully been implemented as signal carriers in collision-based unconventional computing circuits. Both application domains require LMs to roll over substantial distances and to survive a certain number of collisions without degrading. To evaluate the lifetime of LMs being subjected to movement and impact stresses, we have selected four types of coating to investigate: polytetrafluoroethylene (PTFE), ultrahigh density polyethylene (PE), Ni, and a mixture of Ni with PE (Ni-PE). Hierarchies of robustness have been constructed which showed that pure PE LMs survived the longest when stationary and in motion. Pure PTFE LMs were shown to be the least resilient to multiple impacts. The PTFE coating provided minimal protection against evaporative losses for small LM volumes (2 and 5 μL) however, larger LMs (10 μL) were shown to have good evaporative stabilities when stationary. Conversely, PE LMs showed a remarkable ability to withstand multiple impacts and were also stable when considering just passive evaporation. Hybrid Ni-PE LMs exhibited more resilience to multiple impacts compared to Ni LMs. Thus, when designing LM devices, it is paramount to determine impact pathways and select appropriate coating materials.) <|cite_end|> <|cite_start|> (Reference: Evaporation Rate of Graphite Liquid Marbles: Comparison with Water Droplets: Liquid marbles are liquid drops made completely nonwetting by encapsulating the drop with a hydrophobic powder. The absence of contact with the substrate avoids contamination problems and produces high marble displacement velocities. Liquid marbles behave as microreservoirs of liquids able to move without any leakage and are promising candidates to be applied in biomedical and genetic analysis where 2D microfluidics and lab-on-a-chip methods are used. The lifetime of a liquid marble depends on the chemical nature and particle size of the hydrophobic powder as well as the liquid used to form it. There is a need for chemically inert liquid marbles, which can be used over sufficiently long periods for industrial applications. In this work, we successfully synthesized graphite liquid marbles for the first time by encapsulating graphite micropowder on water droplets and determined their evaporation periods and useful lifetimes in constant relative humidity and temperature conditions in a closed chamber. The evaporation rates of graphite liquid marbles were compared with the rates of pure water droplets in the same conditions, and it was found that they had nearly twice the lifetime of pure water droplets. The use of chemically inert graphite particles having electrical conductivity and dry lubrication properties to form a liquid marble may be a starting point for their successful use in microfluidics, genetic analysis, antifouling, wear-free micromachine, electromechanical actuator, and valve applications.) <|cite_end|> and ballistic interactions <|cite_start|> (Reference: Mapping outcomes of liquid marble collisions: Liquid marbles (LMs) have many promising roles in the ongoing development of microfluidics, microreactors, bioreactors, and unconventional computing. 
In many of these applications, the coalescence of two LMs is either required or actively discouraged, therefore it is important to study liquid marble collisions and establish parameters which enable the desired collision outcome. Recent reports on LM coalescence have focused on either two mobile LMs colliding, or an accelerating LM hitting a sessile LM with a backstop. A further possible scenario is the impact of a mobile LM against a non-supported static LM. This paper investigates such a collision, using high-speed videography for single-frame analysis. Multiple collisions were undertaken whilst varying the modified Weber number (We*) and offset ratios (X*). Parameter ranges of 1.0 < We* < 1.4 and 0.0 < X* < 0.1, resulted in a coalescence rate of approximately 50%. Whereas, parameter ranges X* > 0.25, and We* < 0.95 or We* > 1.55 resulted in 100% non-coalescence. Additionally, observations of LMs moving above a threshold velocity of 0.6 m s-1 have revealed a new and unusual deformation. Comparisons of the outcome of collisions whilst varying both the LM volume and the powder grain size have also been made, revealing a strong link. The results of this work provide a deeper understanding of LM coalescence, allowing improved control when designing future collision experiments.) <|cite_end|> <|cite_start|> (Reference: Coalescence of armored interface under impact: Armored interfaces refer to fluid interfaces on which a compact monolayer of particles is adsorbed. In this paper, we probe their robustness under impact. For such an investigation, the impact of a drop (covered or not by particles) on a flat armored interface is considered. Two regimes are observed: small drops impacting at low velocities do not coalesce, while bigger drops falling at higher velocities lead to coalescence. The coalescence which occurs when the impacting drop has just reached its maximum extension directly results from the formation of bare regions within the armor. We therefore propose a geometric criterion to describe this transition. This simple modeling is able to capture the dependence of the measured velocity threshold with particle size and drop diameter. The additional robustness experienced by double armors (both drop and puddle covered) results in an increase of the measured velocity threshold, which is quantitatively predicted.) <|cite_end|>. The current work is therefore presented as a route towards developing microlitre-quantity three-dimensional ballistic-chemical reactors, that exhibit neuromorphic properties and may hence be used as unconventional computing media.
To engineer a neuromorphic effect in our LMs, we chose an aqueous dispersion of carbon nanotubes (CNTs) as the core. In 2001 Cui {\it et al.} <|cite_start|> (Reference: Carbon nanotube memory devices of high charge storage stability: Molecular memory devices with semiconducting single-walled carbon nanotubes constituting a channel of 150 nm in length are described. Data storage is achieved by sweeping gate voltages in the range of 3 V, associated with a storage stability of more than 12 days at room temperature. By annealing in air or controlled oxygen plasma exposure, efficient switching devices could be obtained from thin nanotube bundles that originally showed only a small gate dependence of conductance.) <|cite_end|> experimentally demonstrated that single-walled CNTs can be switched between two conductance states (high-conductance and low-conductance), which differ by more than two orders of magnitude, with a threshold voltage shift of \SI{1.25}{\volt}. Theoretical analysis has shown that CNTs can act as Schottky barrier transistors <|cite_start|> (Reference: Carbon nanotubes as schottky barrier transistors: Field-effect transistors (FETs) made with carbon nanotubes have many attractive features, and are being widely studied as a potential nanoscale successor to silicon FETs. Remarkably, we found that nanotube FETs generally operate by a completely different principle than ordinary Si FETs. Rather than modulate the conductance of the channel, the gate field acts to modulate the tunneling conductance of a Schottky barrier at the contact [1]. As a result, the device performance is determined by completely different factors than in familiar FETs [2-4]. In particular, the nanoscale electric field distribution near the contacts plays a crucial role. As a result, the geometry and workfunction of the contact become as important as more familiar factors like gate-oxide thickness. In addition, there are fundamental differences in the role of Fermi-level pinning at the metal-nanotube contact, compared to ordinary semiconductor interfaces [5].) <|cite_end|> and several patent applications for CNT switching devices have been filed. Regarding progress towards implementations of CNT computing systems, field effect transistors have been described <|cite_start|> (Reference: Molecular electronics with carbon nanotubes: Carbon nanotubes have unique properties that make them a most promising system on which to base molecular electronics. We briefly review the electrical characteristics of carbon nanotubes, and then focus on carbon nanotube field-effect transistors (CNTFETs). Procedures by which hole-transport, electron-transport and ambipolar CNTFETs can be fabricated are presented, and their electrical characteristics are discussed and compared with those of Si MOSFETs. Ways to fabricate arrays of CNTFETs are also demonstrated, and electron and hole CNTFETs are integrated to form complementary logic circuits.) <|cite_end|>, and experimental laboratory evidence suggests that the solid-state switching signatures of CNTs might be due to their relative mechanical movements <|cite_start|> (Reference: Single-walled carbon nanotube based molecular switch tunnel junctions: This article describes two-terminal molecular switch tunnel junctions (MSTJs) which incorporate a semiconducting, single-walled carbon nanotube (SWNT) as the bottom electrode.
The nanotube interacts noncovalently with a monolayer of bistable, nondegenerate [2]catenane tetracations, self-organized by their supporting amphiphilic dimyristoylphosphatidyl anions which shield the mechanically switchable tetracations from a two-micrometer wide metallic top electrode. The resulting 0.002 micron 2 area tunnel junction addresses a nanometer wide row of approximately 2000 molecules. Active and remnant current-voltage measurements demonstrated that these devices can be reconfigurably switched and repeatedly cycled between high and low current states under ambient conditions. Control compounds, including a degenerate [2]catenane, were explored in support of the mechanical origin of the switching signature. These SWNT-based MSTJs operate like previously reported silicon-based MSTJs, but differently from similar devices incorporating bottom metal electrodes. The relevance of these results with respect to the choice of electrode materials for molecular electronics devices is discussed.) <|cite_end|>. Carbon nanotube artificial synapses have previously been prototyped separately by K.\ Kim \textit{et al.} <|cite_start|> (Reference: A carbon nanotube synapse with dynamic logic and learning: A carbon nanotube (CNT) synapse emulates a biological synapse with its dynamic logic, learning, and memory functions induced by the interactions between CNTs and hydrogen ions in an electrochemical cell. A circuit of CNT synapses operates with extremely low-energy consumption and could potentially emulate the functions of the neuronal network.) <|cite_end|> and S.\ Kim \textit{et al.} <|cite_start|> (Reference: Pattern recognition using carbon nanotube synaptic transistors with an adjustable weight update protocol: Recent electronic applications require an efficient computing system that can perform data processing with limited energy consumption. Inspired by the massive parallelism of the human brain, a neuromorphic system (hardware neural network) may provide an efficient computing unit to perform such tasks as classification and recognition. However, the implementation of synaptic devices (i.e., the essential building blocks for emulating the functions of biological synapses) remains challenging due to their uncontrollable weight update protocol and corresponding uncertain effects on the operation of the system, which can lead to a bottleneck in the continuous design and optimization. Here, we demonstrate a synaptic transistor based on highly purified, preseparated 99% semiconducting carbon nanotubes, which can provide adjustable weight update linearity and variation margin. The pattern recognition efficacy is validated using a device-to-system level simulation framework. The enlarged margin rather than the linear weight update can enhance the fault tolerance of the recognition system, which improves the recognition accuracy.) <|cite_end|>: the synapse operates via dynamic interactions between CNTs and hydrogen ions in an electrochemical cell integrated in the synapse. Our aims therefore were to capitalise on these properties of CNTs within LMs.
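Because the characterisation reported later in this paper shows pinched-loop current-voltage hysteresis, it is worth recalling what such a loop looks like in the simplest memristive setting. The snippet below simulates the textbook linear ionic drift memristor under a sinusoidal drive; it is a generic illustration only, not a model of CNT-filled liquid marbles, and every parameter value is an assumption.
\begin{verbatim}
# Minimal memristor sketch (textbook linear ionic drift model), used
# here only to illustrate a pinched I-V hysteresis loop; it is NOT a
# fitted model of the CNT-filled marbles, and every parameter value
# below is an illustrative assumption.
import math

R_ON, R_OFF = 100.0, 16e3    # ohm, limiting resistances
D, MU = 10e-9, 1e-14         # device thickness (m), ion mobility (m^2/Vs)
AMP, FREQ = 1.0, 1.0         # drive amplitude (V) and frequency (Hz)
DT, STEPS = 1e-4, 20000      # time step (s) and number of steps

w = 0.1 * D                  # state variable: doped-region width
for n in range(STEPS):
    t = n * DT
    v = AMP * math.sin(2 * math.pi * FREQ * t)
    r = R_ON * (w / D) + R_OFF * (1 - w / D)
    i = v / r
    w += MU * (R_ON / D) * i * DT          # linear drift of the state
    w = min(max(w, 0.0), D)                # keep the state in bounds
    if n % 2000 == 0:
        print(f"t={t:.2f}s  V={v:+.3f}V  I={i*1e3:+.4f}mA")
# Plotting I against V over one period traces the characteristic loop
# pinched at the origin: the current is zero whenever the voltage is.
\end{verbatim}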
This paper is structured as follows: first, we present our methods for producing neuromorphic LMs, using a copper coating and a CNT-containing solution core. After presenting our results, which detail an electrical characterisation of our LMs, including descriptions of entrainment protocols, we proceed to discuss their design, putative uses and present limitations. <|paper_end|>
"<|reference_start|> Magnetically Driven Manipulation of Nonmagnetic Liquid Marbles: Billiards with Liquid Marbles: Liquid marbles are droplets encapsulated by a layer of hydrophobic nanoparticles and have been extensively employed in digital microfluidics and lab-on-a-chip systems in recent years. In this study, magnetic liquid marbles were used to manipulate nonmagnetic liquid marbles. To achieve this purpose, a ferrofluid liquid marble (FLM) was employed and attracted toward an electromagnet, resulting in an impulse to a water liquid marble (WLM) on its way to the electromagnet. It was observed that the manipulation of the WLM by the FLM was similar to the collision of billiard balls except that the liquid marbles exhibited an inelastic collision. Taking the FLM as the projectile ball and the WLM as the other target balls, one can adjust the displacement and direction of the WLM precisely, similar to an expert billiard player. Firstly, the WLM displacement can be adjusted by altering the liquid marble volumes, the initial distances from the electromagnet, and the coil current. Secondly, the WLM direction can be adjusted by changing the position of the WLM relative to the connecting line between the FLM center and the electromagnet. Results show that when the FLM or WLM volume increases by five times, the WLM shooting distance approximately increases by 200% and decreases by 75%, respectively. <|reference_end|>",
"<|reference_start|> Liquid marbles, elastic nonstick droplets: From minireactors to self-propulsion: Liquid marbles are nonstick droplets wrapped by micro- or nanometrically scaled colloidal particles, representing a platform for a variety of chemical, biological, and microfluidics applications. Liquid marbles demonstrate elastic properties and do not coalesce when bounced or pressed. The effective surface tension and Young modulus of liquid marbles are discussed. Physical sources of the elasticity of liquid marbles are considered. Liquids and powders used for the fabrication of liquid marbles are surveyed. This feature article reviews properties and applications of liquid marbles. Liquid marbles demonstrate potential as microreactors, microcontainers for growing micro-organisms and cells, and microfluidics devices. The Marangoni-flow-driven self-propulsion of marbles supported by liquids is addressed. <|reference_end|>",
"<|reference_start|> On the mechanism of floating and sliding of liquid marbles: The mechanisms of floating and sliding of liquid marbles are studied. Liquid marbles containing CaCl(2) and marbles containing NaOH water solutions float on water containing Na(2)CO(3) and an alcoholic solution of phenolphthalein with no chemical reaction. Sliding of liquid marbles, consisting of NaOH water solutions, on polymer substrates coated with phenolphthalein is studied as well. No chemical reaction is observed. These observations supply direct experimental evidence for the suggestion that interfaces are separated by an air layer when marbles roll on solid substrates. It is concluded that a liquid marble rests on hydrophobic particles coating the liquid. In contrast, drops containing an NaOH water solution sliding on superhydrophobic surfaces coated with phenolphthalein leave a colored trace. The mechanism of low-friction sliding of drops deposited on superhydrophobic surfaces and liquid marbles turns out to be quite different: there is no direct contact between liquid and solid in the case of marbles' motion. <|reference_end|>",
"<|reference_start|> On the nature of the friction between nonstick droplets and solid substrates: The lateral rolling of nonstick water and glycerol droplets deposited on solid substrates inclined at the sliding angles was studied. Droplets deposited on lotuslike water-repellent surfaces and liquid marbles (powder-coated droplets) deposited on solid substrates demonstrated similar behavior. The motion of both droplets and marbles featured friction that is far from Amonton type. The sliding angles of nonstick drops are sensitive to the \"history\" of a drop on the substrate. The friction of glycerol marbles is governed by viscous dissipation, whereas the rolling of water marbles is dictated by adhesion forces acting within the contact area. <|reference_end|>"
] | [
8,
11,
13,
14
] | {"<|multi_cite_1_1|>": "ss-1510942", "<|multi_cite_1_2|>": "arxiv-74369", "<|multi_cite_1_3|>": "arxiv-64887", "<|multi_cite_1_4|>": "ss-1510943", "<|multi_cite_1_5|>": "ss-1510944", "<|multi_cite_1_6|>": "ss-1510945", "<|multi_cite_1_7|>": "ss-1510946", "<|cite_2|>": "ss-1510947", "<|multi_cite_3_1|>": "ss-1015941", "<|multi_cite_3_2|>": "ss-1510948", "<|multi_cite_3_3|>": "ss-1510949", "<|multi_cite_3_4|>": "ss-1510950", "<|multi_cite_4_1|>": "ss-1015950", "<|multi_cite_4_2|>": "ss-1510951", "<|multi_cite_4_3|>": "ss-1510952", "<|multi_cite_5_1|>": "ss-1510953", "<|multi_cite_5_2|>": "ss-1510954", "<|multi_cite_6_1|>": "ss-1510955", "<|multi_cite_6_2|>": "ss-1510950", "<|multi_cite_6_3|>": "ss-1510956", "<|multi_cite_6_4|>": "ss-1510957", "<|multi_cite_7_1|>": "ss-1510958", "<|multi_cite_7_2|>": "ss-1015945", "<|multi_cite_8_1|>": "arxiv-132057", "<|multi_cite_8_2|>": "ss-1510959", "<|multi_cite_9_1|>": "arxiv-132057", "<|multi_cite_9_2|>": "ss-1510959", "<|multi_cite_9_3|>": "ss-1510960", "<|multi_cite_10_1|>": "ss-1986851", "<|multi_cite_10_2|>": "ss-1510961", "<|multi_cite_11_1|>": "ss-1986850", "<|multi_cite_11_2|>": "ss-1510962", "<|cite_12|>": "ss-1510963", "<|cite_13|>": "ss-1510964", "<|cite_15|>": "ss-1510965", "<|cite_16|>": "ss-1510966", "<|cite_17|>": "ss-1510967", "<|cite_18|>": "ss-1510968"} |
2010.01272 | <|paper_start|> Title: Towards Interpretable Reasoning over Paragraph Effects in Situation
Abstract: Towards Interpretable Reasoning over Paragraph Effects in Situation: We focus on the task of reasoning over paragraph effects in situation, which requires a model to understand the cause and effect described in a background paragraph, and apply the knowledge to a novel situation. Existing works ignore the complicated reasoning process and solve it with a one-step "black box" model. Inspired by human cognitive processes, in this paper we propose a sequential approach for this task which explicitly models each step of the reasoning process with neural network modules. In particular, five reasoning modules are designed and learned in an end-to-end manner, which leads to a more interpretable model. Experimental results on the ROPES dataset demonstrate the effectiveness and explainability of our proposed approach.
Introduction
As a long-standing fundamental task of natural language processing, machine reading comprehension (MRC) has attracted remarkable attention recently and different MRC datasets have been studied <|cite_start|> (Reference: “ If you can ' t describe what you are doing as a process , you don ' t know what you ' re doing . ”: In today’s competitive market for software developing organisations it is of high importance that businesses are effective in their development projects in order to deliver products with high quality at the right time and at a low cost. One way to enable this effectiveness is to describe software development processes in a coherent way to increase understanding between different target groups affected by the process. Ellmer and Merkl (1996) call the described development process for “organisational memory” and argue that this is highly important for successful software development. In this study, we look at what process notations to use that can facilitate the described process. The study sets out by conducting a literature review of described software processes and notations. Specifically, this is done by reviewing two main questions: (1) what role the described software process serves and (2) what quality attributes are important to the specific organisation in the described software process. With that understanding in mind, we proceeded by reviewing the use of tools for modelling software processes. Based on the reviewed literature, a questionnaire withholding statements about the two questions was created. The empirical data for the study was collected in face to face interviewees with 12 employees working at senior management and senior engineer level at Ericsson (Lindholmen) in Sweden. At the interviews, the questionnaires were used to gather quantitative data and to get more depth to the study qualitative data was also gathered during interviews. The result from our study reveals that main reasons identified for the role of the described process are in coherence with the reviewed literature. ‘Storing organisational knowledge’, ‘discussing improvements’ and ‘communicating knowledge and competence’ are features identified in the literature as well as during the interviews. Furthermore, analyses from our literature study conclude that there is no obvious common standard for process notations and process modelling. This was confirmed in our interview study as more or less none of the interviews had the same view on what process notation to use. Finally, the result from our study also reveals that main attributes for reaching high quality in the described process differs between what we found in the literature study and in the interview study. Literature promotes the importance of clearly defined roles, which was not promoted by our interviewees. On the contrary, our study indicates a need for presenting clear and understandable deliverables in the described software processes, which is not promoted in the literature.) <|cite_end|> <|cite_start|> (Reference: {DROP: 본 연구는 비만인과 정상인 그룹을 대상으로 drop 착지 동작 시 발생되는 수지 반발력과 운동학적 변인들을(하지 관절 각도, 각속도, 선속도) 분석하여 정상인에 비해 비만인 그룹에서 발생되는 보상행동을 분석하는데 그 목적이 있었다. 비만인 그룹은 정상인 그룹에 비해 drop 착지 동작 시 최대 수직 반발력이 크게 작용하였을 뿐만 아니라 최대 하방 가속도 역시 크게 나타났다. 이에 반해 비만인 그룹은 정상인 그룹에 비해 착지 동작 시 하지 관절 신전/굴곡 각도의 범위가 작았다. 또한 비만인 그룹은 P1에서 하지 관절 각속도와 관절의 선속도에서 정상인 그룹에 비하여 낮은 수치를 나타내었다. 이는 착지로 인해 인체에 전해지는 충격력을 흡수하는 하지 관절 기능성이 정상인 그룹에 비해 비만인 그룹이 낮았기 때문에 비만인 그룹에서는 정상인 그룹에 비해 경성 착지가 발생한 것으로 판단할 수 있다. 이와 같은 근거를 기준으로 비만인 그룹의 경성 착지로 인해 발생되는 보상행동은 다음과 같다. 
발이 지면에 닿는 순간에서 무릎 최대 굴곡 순간까지 힙, 무릎관절 각속도를 빠르게 하였다. 그리고 무릎 최대 굴곡 순간에서 무릎 최대 신전 순간까지는 정상인 그룹에 비해 느린 힙, 무릎, 발목 관절각속도를 보였다. 그리고 어깨, 힙, 무릎 관절의 상방향 수직 선속도에서도 비만인 그룹이 정상인 그룹에 비해 느린 수치를 보였다. 그러므로 착지 동작 수행 시 비만인들은 무릎 최대 굴곡 순간 이후에 착지로 인한 운동학적 보상행동 요인들이 발생되는 경향을 본 연구를 통하여 알 수 있었다.) <|cite_end|> <|cite_start|> (Reference: Q: 读钱解鲁──关于《阿Q正传》刘玉凯读钱钟书先生的《管锥编》,试诠解鲁迅小说《阿Q正传》的文化内涵,洞若观火,兴味无穷。今撮录随记数端,以奉同嗜者共享。钱先生的文学研究,既能高瞻周览,又能剖分见微。他对于一些“陈言加空话”的大而无当之论不以为然。他说,...) <|cite_end|> <|cite_start|> (Reference: Transport properties of heterostructures composed of Mo(S,Se)$_2$ on \emph{h}-BN: ) <|cite_end|>, among which reasoning over paragraph effects in situation (ROPES for short) is a very challenging scenario that needs to understand knowledge from a background paragraph and apply it to answer questions in a novel situation. Table~\ref{tab:example} shows an example of the ROPES dataset <|cite_start|> (Reference: Reasoning Over Paragraph Effects in Situations: A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we present ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations. We target expository language describing causes and effects (e.g., "animal pollinators increase efficiency of fertilization in flowers"), as they have clear implications for new situations. A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. We collect background passages from science textbooks and Wikipedia that contain such phenomena, and ask crowd workers to author situations, questions, and answers, resulting in a 14,322 question dataset. We analyze the challenges of this task and evaluate the performance of state-of-the-art reading comprehension models. The best model performs only slightly better than randomly guessing an answer of the correct type, at 61.6% F1, well below the human performance of 89.0%.) <|cite_end|>, where the background passage states that developmental difficulties could usually be treated by using iodized salt, the situation passage describes two villages using different salt, and questions about which village having more/less people experiencing developmental difficulties need to be answered.
\par
\begin{table}[t!]
{\begin{tabular}[c]{|p{0.95\linewidth}|}
\hline
\textbf{Background}
\\
Before \textcolor{orange}{iodized salt} was developed, some people experienced a number of \textcolor{blue}{developmental difficulties}, including problems with thyroid gland function and mental retardation. In the 1920s, we learned that these conditions could usually be treated easily with the addition of iodide anion to the diet. One easy way to increase iodide intake was to add the anion to table salt.
\\
\textbf{Situation}\\
People from two villages ate lots of salt. People from {\color[HTML]{009901} Salt village} used \textcolor{orange}{regular salt}, while people from {\color[HTML]{009901} Sand village} used \textcolor{orange}{iodized salt} in their diets, after talking to specialists.
\\
\textbf{Q\&A}\\
Q: Which village had more people experience \textcolor{blue}{developmental difficulties}? A: Salt \\
Q: Which village had less people experience \textcolor{blue}{developmental difficulties}? A: Sand \\
\hline
\end{tabular}}
\caption{An example from the ROPES dataset. Effect property tokens are highlighted in \textcolor{blue}{blue}, cause property tokens in \textcolor{orange}{orange}, and world tokens in {\color[HTML]{009901} green.}}
\label{tab:example}
\end{table}
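The reasoning required by the example in Table~\ref{tab:example} can be spelled out step by step: identify the effect asked about, find the cause and the direction of its influence in the background, ground the two worlds in the situation, and compare them. The sketch below hand-codes exactly this chain for this single example; it is an illustration of the intended reasoning, not the module network proposed in this paper, and all names in it are ours.
\begin{verbatim}
# Hand-coded illustration of the reasoning chain behind Table 1.
# The background implies: using iodized salt DECREASES developmental
# difficulties. The situation grounds two worlds with different causes.
# This is a toy for one example, not the learned modules proposed here.

relation = {"cause": "iodized salt",
            "effect": "developmental difficulties",
            "direction": "decreases"}                     # from the background
worlds = {"Salt": "regular salt", "Sand": "iodized salt"} # from the situation

def answer(question: str) -> str:
    wants_more = "more" in question
    # A world whose cause matches the background cause sees the effect
    # decreased, so it has LESS of the effect than the other world.
    affected = [w for w, salt in worlds.items() if salt == relation["cause"]]
    unaffected = [w for w in worlds if w not in affected]
    if relation["direction"] == "decreases":
        return unaffected[0] if wants_more else affected[0]
    return affected[0] if wants_more else unaffected[0]

print(answer("Which village had more people experience developmental difficulties?"))  # Salt
print(answer("Which village had less people experience developmental difficulties?"))  # Sand
\end{verbatim}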
Almost all existing works <|cite_start|> (Reference: Reasoning Over Paragraph Effects in Situations: A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we present ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations. We target expository language describing causes and effects (e.g., "animal pollinators increase efficiency of fertilization in flowers"), as they have clear implications for new situations. A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. We collect background passages from science textbooks and Wikipedia that contain such phenomena, and ask crowd workers to author situations, questions, and answers, resulting in a 14,322 question dataset. We analyze the challenges of this task and evaluate the performance of state-of-the-art reading comprehension models. The best model performs only slightly better than randomly guessing an answer of the correct type, at 61.6% F1, well below the human performance of 89.0%.) <|cite_end|> <|cite_start|> (Reference: UnifiedQA: Crossing Format Boundaries With a Single QA System: Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given the reasoning abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pre-trained QA model into specialized models results in a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems.) <|cite_end|> <|cite_start|> (Reference: ORB: An Open Reading Benchmark for Comprehensive Evaluation of Machine Reading Comprehension: Reading comprehension is one of the crucial tasks for furthering research in natural language understanding. A lot of diverse reading comprehension datasets have recently been introduced to study various phenomena in natural language, ranging from simple paraphrase matching and entity typing to entity tracking and understanding the implications of the context. Given the availability of many such datasets, comprehensive and reliable evaluation is tedious and time-consuming for researchers working on this problem. We present an evaluation server, ORB, that reports performance on seven diverse reading comprehension datasets, encouraging and facilitating testing a single model's capability in understanding a wide variety of reading phenomena. 
The evaluation server places no restrictions on how models are trained, so it is a suitable test bed for exploring training paradigms and representation learning for general reading facility. As more suitable datasets are released, they will be added to the evaluation server. We also collect and include synthetic augmentations for these datasets, testing how well models can handle out-of-domain questions.) <|cite_end|> <|cite_start|> (Reference: Evaluating nlp models via contrast sets: Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets---up to 25\% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes.) <|cite_end|> for this task adopt a standard one-step MRC approach based on deep learning: the question and a pseudo passage constructed by concatenating the background and situation are fed into a large pre-trained model (e.g., RoBERTa-large), and the answer is predicted directly by the model. However, the ROPES task is more complicated than traditional MRC since it requires a model not only to understand the causes and effects described in a background paragraph, but also to apply that knowledge to a novel situation. Ignoring the understanding and reasoning process prevents such models from achieving their best performance. Consequently, the best F1 (61.6\%) achieved so far is far below human performance (89.0\%). More importantly, such a one-step approach leaves the reasoning process unexplainable, even though explainability is of great importance for complicated reasoning tasks.
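For concreteness, a minimal sketch of such a one-step baseline is given below. The Hugging Face-style extractive QA interface and the particular checkpoint name are illustrative assumptions on our part, not the exact configuration used in the works cited above.
\begin{verbatim}
from transformers import pipeline

# One-step baseline: build a pseudo passage by concatenating the background
# and the situation, then let a pre-trained extractive QA model predict a span.
qa = pipeline("question-answering",
              model="deepset/roberta-base-squad2")  # illustrative checkpoint

def one_step_baseline(background: str, situation: str, question: str) -> str:
    pseudo_passage = background + " " + situation
    prediction = qa(question=question, context=pseudo_passage)
    return prediction["answer"]
\end{verbatim}
The purpose of this sketch is only to make the baseline's single prediction step explicit; all of the reasoning discussed below happens implicitly inside the pre-trained model.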
We observe that humans solve this kind of complicated reasoning task sequentially, in multiple steps <|cite_start|> (Reference: Heuristic and analytic processes in reasoning: A general two-stage theory of human inference is proposed. A distinction is drawn between heuristic processes which select items of task information as ‘relevant’, and analytic processes which operate on the selected items to generate inferences or judgements.
These two stages are illustrated in a selective review of work on both deductive and statistical reasoning. Factors identified as contributing to heuristic selection include perceptual salience, linguistic suppositions and semantic associations. Analytic processes are considered to be context dependent: people reason from experience, not from inference rules. The paper includes discussion of the theory in comparison with other contemporary theories of human inference, and in relation to the current debate about human rationality.) <|cite_end|> <|cite_start|> (Reference: The empirical case for two systems of reasoning.: Distinctions have been proposed between systems of reasoning for centuries. This article distills properties shared by many of these distinctions and characterizes the resulting systems in light of recent findings and theoretical developments. One system is associative because its computations reflect similarity structure and relations of temporal contiguity. The other is "rule based" because it operates on symbolic structures that have logical content and variables and because its computations have the properties that are normally assigned to rules. The systems serve complementary functions and can simultaneously generate different solutions to a reasoning problem. The rule-based system can suppress the associative system but not completely inhibit it. The article reviews evidence in favor of the distinction and its characterization. One of the oldest conundrums in psychology is whether people are best conceived as parallel processors of information who operate along diffuse associative links or as analysts who operate by deliberate and sequential manipulation of internal representations. Are inferences drawn through a network of learned associative pathways or through application of a kind of"psychologic" that manipulates symbolic tokens in a rule-governed way? The debate has raged (again) in cognitive psychology for almost a decade now. It has pitted those who prefer models of mental phenomena to be built out of networks of associative devices that pass activation around in parallel and distributed form (the way brains probably function) against those who prefer models built out of formal languages in which symbols are composed into sentences that are processed sequentially (the way computers function). An obvious solution to the conundrum is to conceive of the) <|cite_end|> <|cite_start|> (Reference: Differences in the metacognitive awareness of reading strategies among native and non-native readers: ) <|cite_end|> <|cite_start|> (Reference: Assessing students' metacognitive awareness of reading strategies.: This article describes the development and validation of a new self-report instrument, the Metacognitive Awareness of Reading Strategies Inventory, which is designed to assess adolescent and adult readers' metacognitive awareness and perceived use of reading strategies while reading academic or school-related materials. There were 3 strategy subscales or factors: Global Reading Strategies, Problem-Solving Strategies, and Support Reading Strategies. The reliability and factorial validity of the scale were demonstrated. After a brief review of the literature, the development and validation of the instrument are described, and its psychometric properties are discussed. In addition, directions for administering and scoring the instrument are provided, and suggestions for interpreting the results obtained are offered. 
Finally, the scales' implications for reading research and instruction are discussed.) <|cite_end|> <|cite_start|> (Reference: Measuring ESL Students' Awareness of Reading Strategies: In this article, we describe an instrument, Survey of Reading Strategies (SORS), which is intended to measure adolescent and adult English as a Second Language (ESL) students' metacognitive awareness and perceived use of reading strategies (broadly defined here as mental plans, techniques, and actions taken while reading academic or school-related materials). We further suggest ways of using the instrument as a means of increasing learner awareness of reading strategies, which has been shown to help students improve reading comprehension skills. The development of SORS is our attempt to assist developmental education teachers in helping their ESL students increase metacognitive awareness and become thoughtful, constructively responsive, and strategic readers while reading academic materials-one of the major reasons for their learning of English. Three compelling reasons have motivated us to develop the SORS: First, there is strong research support for the positive relationship between students' metacognitive awareness of reading processes and their ability to read and excel academically (e.g., Alderson, 1984; Carrell, 1991; Clarke, 1979; Cziko, 1978). Second, although there are several instruments aimed at assessing native speakers' metacognitive awareness of reading processes (see Mokhtari & Reichard, 2002, for a brief review of these instruments), we could not find any published instruments that are specifically designed to assess ESL students' metacognitive awareness and perceived use of reading strategies while reading for academic purposes. Third, we have found that, even though there is some agreement among researchers that a number of reading strategies are transferable from one language to another (c.f., Alderson, 1984; Carrell, 1991), the existing instruments do not take into account some of the strategies that are unique to students who are literate in more than one language such as translating from English into one's native language or using both languages when reading to maximize understanding. Consequently, such instruments may not be appropriate for an ESL population. Finally, given the recent and projected increases in cultural and linguistic diversity in schools, colleges, and university classrooms (August & Hakuta, 1997), instructors will be in need of adequate tools for assessing skills and teaching students how to read academic materials efficiently and effectively. Instruments such as SORS should fit within a comprehensive reading assessment for ESL learners at the institution. It should effectively complement many of the traditionally used standard reading assessment tests, such as the Nelson Denney (Brown & Brown, 1993), which simply do not assess students' awareness and control of comprehension processes while reading. SORS is presented as a simple, yet effective tool for enabling students to develop a better awareness of their reading strategies, for helping teachers assess such awareness, and for assisting students in becoming constructively responsive readers. The development of the SORS was initially inspired by the review and use of another instrument Metacognitive Awareness of Reading Strategies Inventory (MARSI), which was developed by Mokhtari and Reichard (2002) as a measure of students' metacognitive awareness of reading strategies. 
Because MARSI was originally designed for students who are native English speakers, it was inappropriate for use with non-native speakers, which led us to adapt it so that it could be used appropriately for an ESL population. The development of SORS was further inspired by our own experiences teaching language and literacy skills to college-level ESL students. Such experiences are highlighted by a series of observations concerning the mental processes some ESL students go through and the actions they take when reading for academic purposes. Auerbach and Paxton (1997) provide the following illustration of such processes: I used to believe that I have to know all the words in the English readings in order to understand the readings. …) <|cite_end|>. As shown in Table \ref{tab:example}, the background paragraph usually states the relationship between a cause property and an effect property, while the situation describes multiple worlds, each of which is associated with a specific value of the cause property. Humans usually reason through such questions in a multi-step process: (1) identifying the mentioned worlds, (2) identifying the cause and effect properties, (3) understanding the relationship between the cause and effect properties, (4) comparing the identified worlds in terms of the cause property, and (5) reasoning about the comparison of the mentioned worlds in terms of the effect property based on (3) and (4).
Inspired by human cognitive processes, in this paper, we propose a sequential approach that leverages neural network modules to implement each step of the above process\footnote{The code is publicly available at \url{https://github.com/Borororo/interpretable_ropes}.}. Specifically, we define
\begin{itemize}
\item a \textit{World Detection} module to identify potential worlds,
\item an \textit{Effect and Cause Detection} module to identify effect and cause property,
\item a \textit{Relation Classification} module to understand the relationship between effect and cause,
\item a \textit{Comparison} module to compare identified worlds in terms of the cause property, and
\item a \textit{Reasoning} module to infer comparison of mentioned worlds in terms of the effect property.
\end{itemize}
These modules are trained in an end-to-end manner, and auxiliary loss over intermediate latent decisions further boosts the model accuracy.
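To make the flow between these modules explicit, the following schematic pseudocode shows one way the five modules can be composed; the function signatures and variable names are placeholders chosen for illustration rather than our actual implementation, and the contextual encodings correspond to $\bm{H^q,H^s,H^b}$ in Figure~\ref{fig:IR}.
\begin{verbatim}
# Schematic composition of the five modules. Each module consumes the shared
# contextual encodings of the question (H_q), situation (H_s), background (H_b).
def interpretable_ropes_pipeline(H_q, H_s, H_b, modules):
    worlds = modules.world_detection(H_s)                            # step (1)
    effect, cause = modules.effect_cause_detection(H_q, H_b)         # step (2)
    relation = modules.relation_classification(H_b, effect, cause)   # step (3)
    world_order = modules.comparison(H_s, worlds, cause)             # step (4)
    answer = modules.reasoning(relation, world_order, worlds)        # step (5)
    return answer
\end{verbatim}
In our model, each of these calls is a small neural sub-network rather than a hand-written rule, and the intermediate variables are the latent decisions over which the auxiliary loss is applied.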
Explicitly modeling the sequential reasoning process has two advantages. First, it achieves better performance since the complicated reasoning process is decomposed into more manageable sub-tasks and each module only needs to focus on a simple sub-task. Second, intermediate outputs provide a better understanding of the reasoning process, making the learnt model more explainable.
Experimental results on the ROPES dataset demonstrate the effectiveness and explainability of our proposed approach. It surpasses the state-of-the-art model by a large margin (6\% absolute difference) in the five-fold cross-validation setting. Furthermore, analyses of the intermediate outputs show that each module in our learnt model performs well on its corresponding sub-task and explains the reasoning process well.
Related Work
Neural network modules have been studied in several works. <|cite_start|> (Reference: Neural Module Networks: Visual question answering is fundamentally compositional in nature---a question like "where is the dog?" shares substructure with questions like "what color is the dog?" and "where is the cat?" This paper seeks to simultaneously exploit the representational capacity of deep networks and the compositional linguistic structure of questions. We describe a procedure for constructing and learning *neural module networks*, which compose collections of jointly-trained neural "modules" into deep networks for question answering. Our approach decomposes questions into their linguistic substructures, and uses these structures to dynamically instantiate modular networks (with reusable components for recognizing dogs, classifying colors, etc.). The resulting compound networks are jointly trained. We evaluate our approach on two challenging datasets for visual question answering, achieving state-of-the-art results on both the VQA natural image dataset and a new dataset of complex questions about abstract shapes.) <|cite_end|> propose neural module networks with a semantic parser on visual question answering. <|cite_start|> (Reference: Self-Assembling Modular Networks for Interpretable Multi-Hop Reasoning: Multi-hop QA requires a model to connect multiple pieces of evidence scattered in a long context to answer the question. The recently proposed HotpotQA (Yang et al., 2018) dataset is comprised of questions embodying four different multi-hop reasoning paradigms (two bridge entity setups, checking multiple properties, and comparing two entities), making it challenging for a single neural network to handle all four. In this work, we present an interpretable, controller-based Self-Assembling Neural Modular Network (Hu et al., 2017, 2018) for multi-hop reasoning, where we design four novel modules (Find, Relocate, Compare, NoOp) to perform unique types of language reasoning. Based on a question, our layout controller RNN dynamically infers a series of reasoning modules to construct the entire network. Empirically, we show that our dynamic, multi-hop modular network achieves significant improvements over the static, single-hop baseline (on both regular and adversarial evaluation). We further demonstrate the interpretability of our model via three analyses. First, the controller can softly decompose the multi-hop question into multiple single-hop sub-questions to promote compositional reasoning behavior of the main network. Second, the controller can predict layouts that conform to the layouts designed by human experts. Finally, the intermediate module can infer the entity that connects two distantly-located supporting facts by addressing the sub-question from the controller.) <|cite_end|> apply a self-assembling modular network with only three modules (Find, Relocate, and Compare) to HotpotQA <|cite_start|> (Reference: Transport properties of heterostructures composed of Mo(S,Se)$_2$ on \emph{h}-BN: ) <|cite_end|>. <|cite_start|> (Reference: Neural Module Networks for Reasoning over Text: Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains.
However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by: (a) introducing modules that reason over a paragraph of text, performing symbolic reasoning (such as arithmetic, sorting, counting) over numbers and dates in a probabilistic and differentiable manner; and (b) proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically-obtained question program and intermediate module output supervision provides sufficient inductive bias for accurate learning. Our proposed model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges that are covered by our modules.) <|cite_end|> extend the neural module networks to answer compositional questions against a paragraphs of text as context, and perform symbolic reasoning on the self-pruned subset of DROPS <|cite_start|> (Reference: {DROP: 본 연구는 비만인과 정상인 그룹을 대상으로 drop 착지 동작 시 발생되는 수지 반발력과 운동학적 변인들을(하지 관절 각도, 각속도, 선속도) 분석하여 정상인에 비해 비만인 그룹에서 발생되는 보상행동을 분석하는데 그 목적이 있었다. 비만인 그룹은 정상인 그룹에 비해 drop 착지 동작 시 최대 수직 반발력이 크게 작용하였을 뿐만 아니라 최대 하방 가속도 역시 크게 나타났다. 이에 반해 비만인 그룹은 정상인 그룹에 비해 착지 동작 시 하지 관절 신전/굴곡 각도의 범위가 작았다. 또한 비만인 그룹은 P1에서 하지 관절 각속도와 관절의 선속도에서 정상인 그룹에 비하여 낮은 수치를 나타내었다. 이는 착지로 인해 인체에 전해지는 충격력을 흡수하는 하지 관절 기능성이 정상인 그룹에 비해 비만인 그룹이 낮았기 때문에 비만인 그룹에서는 정상인 그룹에 비해 경성 착지가 발생한 것으로 판단할 수 있다. 이와 같은 근거를 기준으로 비만인 그룹의 경성 착지로 인해 발생되는 보상행동은 다음과 같다. 발이 지면에 닿는 순간에서 무릎 최대 굴곡 순간까지 힙, 무릎관절 각속도를 빠르게 하였다. 그리고 무릎 최대 굴곡 순간에서 무릎 최대 신전 순간까지는 정상인 그룹에 비해 느린 힙, 무릎, 발목 관절각속도를 보였다. 그리고 어깨, 힙, 무릎 관절의 상방향 수직 선속도에서도 비만인 그룹이 정상인 그룹에 비해 느린 수치를 보였다. 그러므로 착지 동작 수행 시 비만인들은 무릎 최대 굴곡 순간 이후에 착지로 인한 운동학적 보상행동 요인들이 발생되는 경향을 본 연구를 통하여 알 수 있었다.) <|cite_end|>. Compared with them, we focus on a more challenging MRC task: reasoning over paragraph effects in situation, which has been rarely investigated and needs more complex reasoning. So far as we know, the only two works (i.e. <|cite_start|> (Reference: Reasoning Over Paragraph Effects in Situations: A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we present ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations. We target expository language describing causes and effects (e.g., "animal pollinators increase efficiency of fertilization in flowers"), as they have clear implications for new situations. A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. We collect background passages from science textbooks and Wikipedia that contain such phenomena, and ask crowd workers to author situations, questions, and answers, resulting in a 14,322 question dataset. We analyze the challenges of this task and evaluate the performance of state-of-the-art reading comprehension models. The best model performs only slightly better than randomly guessing an answer of the correct type, at 61.6% F1, well below the human performance of 89.0%.) 
<|cite_end|> and <|cite_start|> (Reference: UnifiedQA: Crossing Format Boundaries With a Single QA System: Question answering (QA) tasks have been posed using a variety of formats, such as extractive span selection, multiple choice, etc. This has led to format-specialized models, and even to an implicit division in the QA community. We argue that such boundaries are artificial and perhaps unnecessary, given the reasoning abilities we seek to teach are not governed by the format. As evidence, we use the latest advances in language modeling to build a single pre-trained QA model, UnifiedQA, that performs surprisingly well across 17 QA datasets spanning 4 diverse formats. UnifiedQA performs on par with 9 different models that were trained on individual datasets themselves. Even when faced with 12 unseen datasets of observed formats, UnifiedQA performs surprisingly well, showing strong generalization from its out-of-format training data. Finally, simply fine-tuning this pre-trained QA model into specialized models results in a new state of the art on 6 datasets, establishing UnifiedQA as a strong starting point for building QA systems.) <|cite_end|>) on this topic use a one-step ``black box'' model. Such an approach performs well on some questions at the expense of limited interpretability. Our work solves this task in a logical manner and exposes the intermediate reasoning steps, which improves performance and interpretability concurrently.\\
\begin{figure*}[ht]
\centering
\includegraphics[height=0.29\textheight,width=\textwidth]{world_all_v2.png}
\caption{The left part is the architecture of our model. The middle part is the interpretable reasoning component of our model. The right part summarizes the inputs and outputs flowing between the modules. The encoded contextual representations, $\bm{H^q,H^s,H^b}$, serve as global variables for the interpretable reasoning component.}
\label{fig:IR}
\end{figure*} <|paper_end|> | [
"<|reference_start|> Evaluating nlp models via contrast sets: Standard test sets for supervised learning evaluate in-distribution generalization. Unfortunately, when a dataset has systematic gaps (e.g., annotation artifacts), these evaluations are misleading: a model can learn simple decision rules that perform well on the test set but do not capture a dataset's intended capabilities. We propose a new annotation paradigm for NLP that helps to close systematic gaps in the test data. In particular, after a dataset is constructed, we recommend that the dataset authors manually perturb the test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets. Contrast sets provide a local view of a model's decision boundary, which can be used to more accurately evaluate a model's true linguistic capabilities. We demonstrate the efficacy of contrast sets by creating them for 10 diverse NLP datasets (e.g., DROP reading comprehension, UD parsing, IMDb sentiment analysis). Although our contrast sets are not explicitly adversarial, model performance is significantly lower on them than on the original test sets---up to 25\\% in some cases. We release our contrast sets as new evaluation benchmarks and encourage future dataset construction efforts to follow similar annotation processes. <|reference_end|>",
"<|reference_start|> Heuristic and analytic processes in reasoning: A general two-stage theory of human inference is proposed. A distinction is drawn between heuristic processes which select items of task information as ‘relevant’, and analytic processes which operate on the selected items to generate inferences or judgements. \n \nThese two stages are illustrated in a selective review of work on both deductive and statistical reasoning. Factors identified as contributing to heuristic selection include perceptual salience, linguistic suppositions and semantic associations. Analytic processes are considered to be context dependent: people reason from experience, not from inference rules. The paper includes discussion of the theory in comparison with other contemporary theories of human inference, and in relation to the current debate about human rationality. <|reference_end|>",
"<|reference_start|> Neural Module Networks for Reasoning over Text: Answering compositional questions that require multiple steps of reasoning against text is challenging, especially when they involve discrete, symbolic operations. Neural module networks (NMNs) learn to parse such questions as executable programs composed of learnable modules, performing well on synthetic visual QA domains. However, we find that it is challenging to learn these models for non-synthetic questions on open-domain text, where a model needs to deal with the diversity of natural language and perform a broader range of reasoning. We extend NMNs by: (a) introducing modules that reason over a paragraph of text, performing symbolic reasoning (such as arithmetic, sorting, counting) over numbers and dates in a probabilistic and differentiable manner; and (b) proposing an unsupervised auxiliary loss to help extract arguments associated with the events in text. Additionally, we show that a limited amount of heuristically-obtained question program and intermediate module output supervision provides sufficient inductive bias for accurate learning. Our proposed model significantly outperforms state-of-the-art models on a subset of the DROP dataset that poses a variety of reasoning challenges that are covered by our modules. <|reference_end|>",
"<|reference_start|> Reasoning Over Paragraph Effects in Situations: A key component of successfully reading a passage of text is the ability to apply knowledge gained from the passage to a new situation. In order to facilitate progress on this kind of reading, we present ROPES, a challenging benchmark for reading comprehension targeting Reasoning Over Paragraph Effects in Situations. We target expository language describing causes and effects (e.g., \"animal pollinators increase efficiency of fertilization in flowers\"), as they have clear implications for new situations. A system is presented a background passage containing at least one of these relations, a novel situation that uses this background, and questions that require reasoning about effects of the relationships in the background passage in the context of the situation. We collect background passages from science textbooks and Wikipedia that contain such phenomena, and ask crowd workers to author situations, questions, and answers, resulting in a 14,322 question dataset. We analyze the challenges of this task and evaluate the performance of state-of-the-art reading comprehension models. The best model performs only slightly better than randomly guessing an answer of the correct type, at 61.6% F1, well below the human performance of 89.0%. <|reference_end|>"
] | [
8,
9,
17,
19
] | {"<|multi_cite_3_1|>": "ss-1832266", "<|multi_cite_3_2|>": "ss-1234837", "<|multi_cite_3_3|>": "ss-963504", "<|multi_cite_3_4|>": "ss-888331", "<|cite_4|>": "arxiv-218950", "<|multi_cite_1_1|>": "arxiv-218950", "<|multi_cite_1_2|>": "arxiv-263002", "<|multi_cite_1_3|>": "arxiv-241267", "<|multi_cite_1_4|>": "ss-1957912", "<|multi_cite_2_1|>": "ss-1961178", "<|multi_cite_2_2|>": "ss-2082816", "<|multi_cite_2_3|>": "ss-1961179", "<|multi_cite_2_4|>": "ss-1132167", "<|multi_cite_2_5|>": "ss-1132166", "<|cite_9|>": "arxiv-86843", "<|cite_10|>": "arxiv-223405", "<|cite_5|>": "ss-888331", "<|cite_11|>": "arxiv-238719", "<|cite_6|>": "ss-1234837", "<|cite_7|>": "arxiv-218950", "<|cite_8|>": "arxiv-263002"} |
1510.07394 | <|paper_start|> Title: Capacity of the Gaussian Two-Hop Full-Duplex Relay Channel with Residual Self-Interference
Abstract: Capacity of the Gaussian Two-Hop Full-Duplex Relay Channel with Residual Self-Interference: In this paper, we investigate the capacity of the Gaussian two-hop full-duplex (FD) relay channel with residual self-interference. This channel is comprised of a source, an FD relay, and a destination, where a direct source-destination link does not exist and the FD relay is impaired by residual self-interference. We adopt the worst-case linear self-interference model with respect to the channel capacity, and model the residual self-interference as a Gaussian random variable whose variance depends on the amplitude of the transmit symbol of the relay. For this channel, we derive the capacity and propose an explicit capacity-achieving coding scheme. Thereby, we show that the optimal input distribution at the source is Gaussian and its variance depends on the amplitude of the transmit symbol of the relay. On the other hand, the optimal input distribution at the relay is discrete or Gaussian, where the latter case occurs only when the relay-destination link is the bottleneck link. The derived capacity converges to the capacity of the two-hop ideal FD relay channel without self-interference and to the capacity of the two-hop half-duplex (HD) relay channel in the limiting cases when the residual self-interference is zero and infinite, respectively. Our numerical results show that significant performance gains are achieved with the proposed capacity-achieving coding scheme compared to the achievable rates of conventional HD relaying and/or conventional FD relaying.
Introduction
In wireless communications, relays are employed in order to increase the data rate between a source and a destination. The resulting three-node channel is known as the relay channel <|cite_start|> (Reference: Capacity theorems for the relay channel: A relay channel consists of an input x_{l} , a relay output y_{1} , a channel output y , and a relay sender x_{2} (whose transmission is allowed to depend on the past symbols y_{1} . The dependence of the received symbols upon the inputs is given by p(y,y_{1}|x_{1},x_{2}) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_{1} , then C \: = \: \max \!_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y), I(X_{1}; Y_{1}|X_{2})} . 2)If y_{1} is a degraded form of y , then C \: = \: \max \!_{p(x_{1})} \max_{x_{2}} I(X_{1};Y|x_{2}) . 3)If p(y,y_{1}|x_{1},x_{2}) is an arbitrary relay channel with feedback from (y,y_{1}) to both x_{1} \and x_{2} , then C\: = \: \max_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y),I \,(X_{1};Y,Y_{1}|X_{2})} . 4)For a general relay channel, C \: \leq \: \max_{p(x_{1},x_{2})} \min \,{I \,(X_{1}, X_{2};Y),I(X_{1};Y,Y_{1}|X_{2}) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.) <|cite_end|>. If the distance between the source and the destination is very large or there is heavy blockage, then the relay channel can be modeled without a source-destination link, which leads to the so called two-hop relay channel. For the relay channel, there are two different modes of operation for the relay, namely, the full-duplex (FD) mode and the half-duplex (HD) mode. In the FD mode, the relay transmits and receives at the same time and in the same frequency band. As a result, FD relays are impaired by self-interference, which is the interference caused by the relay's transmit signal to the relay's received signal. Latest advances in hardware design have shown that the self-interference of an FD node can be suppressed significantly, see
\nocite{5089955, Choi:2010, 5961159, 5985554, Jain_2011, 6177689, 6280258, 6353396, Bharadia:2013:FDR:2486001.2486033, 6542771, 6523998, 6702851,6736751, 6656015, 6782415, 6832592, 6862895, 6832471, 6832464, 6832439, 7105647, 7024120, 7051286, 7390828, 6736751, 7182305}- <|cite_start|> (Reference: Throughput Analysis for Full-Duplex Wireless Networks with Imperfect Self-interference Cancellation: This paper investigates the throughput for wireless network with full-duplex radios using stochastic geometry. Full-duplex (FD) radios can exchange data simultaneously with each other. On the other hand, the downside of FD transmission is that it will inevitably cause extra interference to the network compared to half-duplex (HD) transmission. Moreover, the residual self-interference has negative effects on the network throughput. In this paper, we focus on a wireless network of nodes with both HD and FD capabilities and derive and optimize the throughput in such a network. Our analytical result shows that if the network is adapting an ALOHA protocol, the maximal throughput is achieved by scheduling all concurrently transmitting nodes to work in either FD mode or HD mode depending on one simple condition. Moreover, the effects of imperfect self-interference cancellation on the signal-to-interference ratio (SIR) loss and throughput are also analyzed based on our mathematical model. We rigorously quantify the impact of imperfect self-interference cancellation on the throughput gain, transmission range, and other metrics, and we establish the minimum amount of self-interference suppression needed for FD to be beneficial.) <|cite_end|>, which has led to an enormous interest in FD communication. For example, <|cite_start|> (Reference: {Full Duplex Radios: This chapter presents the design and implementation of the first in‐band full‐duplex WiFi radios that can simultaneously transmit and receive on the same channel using standard WiFi 802.11 ac PHYs and achieves close to the theoretical doubling of throughput in all practical deployment scenarios. Our design uses a single antenna for simultaneous TX/RX (the same resources as a standard half‐duplex system). We also propose novel analog and digital cancellation techniques that remove the self‐interference to the receiver noise floor, and therefore ensure that there is no degradation to the received signal. We prototype our design by building our own analog circuit boards and integrating them with a fully WiFi–PHY compatible software radio implementation. We show experimentally that our design works robustly in noisy indoor environments, and provides close to the expected theoretical doubling of throughput in practice.) <|cite_end|> reported that self-interference suppression of 110 dB is possible in certain scenarios.
On the other hand, in the HD mode, the relay transmits and receives in the same frequency band but in different time slots or in the same time slot but in different frequency bands. As a result, HD relays completely avoid self-interference. However, since an HD relay transmits and receives only in half of the time/frequency resources compared to an FD relay, the achievable rate of the two-hop HD relay channel may be significantly lower than that of the two-hop FD relay channel.
Information-theoretic analyses of the capacity of the two-hop HD relay channel were provided in <|cite_start|> (Reference: Models and Theory for Relay Channels with Receive Constraints: Relay channels where terminals cannot receive and transmit at the same time are modeled as being memoryless with cost constraints. Cost functions are considered that measure the power consumed in each of three sleep-listen-or-talk (SLoT) modes, as well as the fraction of time the modes are used. It is shown that strategies that have the SLoT modes known ahead of time by all terminals are generally suboptimal. It is further shown that Gaussian input distributions are generally suboptimal for Gaussian channels. For several types of models and SLoT constraints, it is shown that multi-hopping (or decode-andforward) achieves the information-theoretic capacity if the relay is geometrically near the source terminal, and if the fraction of time the relay listens to the source is lower bounded by a positive number. SLoT constraints for which the capacity claim might not be valid are discussed. Finally, it is pointed out that a lack of symbol synchronization between the relays has little or no effect on the capacity theorems if the signals are bandlimited and if independent input signals are optimal.) <|cite_end|>, <|cite_start|> (Reference: On the Capacity of the Two-Hop Half-Duplex Relay Channel: Although extensively investigated, the capacity of the two-hop half-duplex (HD) relay channel is not fully understood. In particular, a capacity expression which can be easily evaluated is not available and an explicit coding scheme which achieves the capacity is not known either. In this paper, we derive a new expression for the capacity of the two-hop HD relay channel by simplifying previously derived converse expressions. Compared to previous results, the new capacity expression can be easily evaluated. Moreover, we propose an explicit coding scheme which achieves the capacity. To achieve the capacity, the relay does not only send information to the destination by transmitting information-carrying symbols but also with the zero symbols resulting from the relay's silence during reception. As examples, we compute the capacities of the two-hop HD relay channel for the cases when the source-relay and relay-destination links are both binary-symmetric channels (BSCs) and additive white Gaussian noise (AWGN) channels, respectively, and numerically compare the capacities with the rates achieved by conventional relaying where the relay receives and transmits in a codeword-by-codeword fashion and switches between reception and transmission in a strictly alternating manner. Our numerical results show that the capacities of the two-hop HD relay channel for BSC and AWGN links are significantly larger than the rates achieved with conventional relaying.) <|cite_end|>. Thereby, it was shown that the capacity of the two-hop HD relay channel is achieved when the HD relay switches between reception and transmission in a symbol-by-symbol manner and not in a codeword-by-codeword manner, as is done in conventional HD relaying <|cite_start|> (Reference: Capacity Bounds and Power Allocation for Wireless Relay Channels: We consider three-node wireless relay channels in a Rayleigh-fading environment. Assuming transmitter channel state information (CSI), we study upper bounds and lower bounds on the outage capacity and the ergodic capacity. 
Our studies take into account practical constraints on the transmission/reception duplexing at the relay node and on the synchronization between the source node and the relay node. We also explore power allocation. Compared to the direct transmission and traditional multihop protocols, our results reveal that optimum relay channel signaling can significantly outperform multihop protocols, and that power allocation has a significant impact on the performance.) <|cite_end|>. Moreover,
in order to achieve the capacity, the HD relay has to encode information into the silent symbol created when the relay receives <|cite_start|> (Reference: On the Capacity of the Two-Hop Half-Duplex Relay Channel: Although extensively investigated, the capacity of the two-hop half-duplex (HD) relay channel is not fully understood. In particular, a capacity expression which can be easily evaluated is not available and an explicit coding scheme which achieves the capacity is not known either. In this paper, we derive a new expression for the capacity of the two-hop HD relay channel by simplifying previously derived converse expressions. Compared to previous results, the new capacity expression can be easily evaluated. Moreover, we propose an explicit coding scheme which achieves the capacity. To achieve the capacity, the relay does not only send information to the destination by transmitting information-carrying symbols but also with the zero symbols resulting from the relay's silence during reception. As examples, we compute the capacities of the two-hop HD relay channel for the cases when the source-relay and relay-destination links are both binary-symmetric channels (BSCs) and additive white Gaussian noise (AWGN) channels, respectively, and numerically compare the capacities with the rates achieved by conventional relaying where the relay receives and transmits in a codeword-by-codeword fashion and switches between reception and transmission in a strictly alternating manner. Our numerical results show that the capacities of the two-hop HD relay channel for BSC and AWGN links are significantly larger than the rates achieved with conventional relaying.) <|cite_end|>.
For the Gaussian two-hop HD relay channel without fading, it was shown in <|cite_start|> (Reference: On the Capacity of the Two-Hop Half-Duplex Relay Channel: Although extensively investigated, the capacity of the two-hop half-duplex (HD) relay channel is not fully understood. In particular, a capacity expression which can be easily evaluated is not available and an explicit coding scheme which achieves the capacity is not known either. In this paper, we derive a new expression for the capacity of the two-hop HD relay channel by simplifying previously derived converse expressions. Compared to previous results, the new capacity expression can be easily evaluated. Moreover, we propose an explicit coding scheme which achieves the capacity. To achieve the capacity, the relay does not only send information to the destination by transmitting information-carrying symbols but also with the zero symbols resulting from the relay's silence during reception. As examples, we compute the capacities of the two-hop HD relay channel for the cases when the source-relay and relay-destination links are both binary-symmetric channels (BSCs) and additive white Gaussian noise (AWGN) channels, respectively, and numerically compare the capacities with the rates achieved by conventional relaying where the relay receives and transmits in a codeword-by-codeword fashion and switches between reception and transmission in a strictly alternating manner. Our numerical results show that the capacities of the two-hop HD relay channel for BSC and AWGN links are significantly larger than the rates achieved with conventional relaying.) <|cite_end|> that the optimal input distribution at the relay is discrete and includes the zero (i.e., silent) symbol. On the other hand, the source transmits using a Gaussian input distribution when the relay transmits the zero (i.e., silent) symbol and is silent otherwise.
The capacity of the Gaussian two-hop FD relay channel with ideal FD relaying without residual self-interference was derived in <|cite_start|> (Reference: Capacity theorems for the relay channel: A relay channel consists of an input x_{l} , a relay output y_{1} , a channel output y , and a relay sender x_{2} (whose transmission is allowed to depend on the past symbols y_{1} . The dependence of the received symbols upon the inputs is given by p(y,y_{1}|x_{1},x_{2}) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_{1} , then C \: = \: \max \!_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y), I(X_{1}; Y_{1}|X_{2})} . 2)If y_{1} is a degraded form of y , then C \: = \: \max \!_{p(x_{1})} \max_{x_{2}} I(X_{1};Y|x_{2}) . 3)If p(y,y_{1}|x_{1},x_{2}) is an arbitrary relay channel with feedback from (y,y_{1}) to both x_{1} \and x_{2} , then C\: = \: \max_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y),I \,(X_{1};Y,Y_{1}|X_{2})} . 4)For a general relay channel, C \: \leq \: \max_{p(x_{1},x_{2})} \min \,{I \,(X_{1}, X_{2};Y),I(X_{1};Y,Y_{1}|X_{2}) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.) <|cite_end|>. However, in practice, canceling the residual self-interference completely is not possible due to limitations in channel estimation precision and imperfections in the transceiver design <|cite_start|> (Reference: In-Band Full-Duplex Wireless: Challenges and Opportunities: In-band full-duplex (IBFD) operation has emerged as an attractive solution for increasing the throughput of wireless communication systems and networks. With IBFD, a wireless terminal is allowed to transmit and receive simultaneously in the same frequency band. This tutorial paper reviews the main concepts of IBFD wireless. Because one the biggest practical impediments to IBFD operation is the presence of self-interference, i.e., the interference caused by an IBFD node's own transmissions to its desired receptions, this tutorial surveys a wide range of IBFD self-interference mitigation techniques. Also discussed are numerous other research challenges and opportunities in the design and analysis of IBFD wireless systems.) <|cite_end|>.
As a result, the residual self-interference has to be taken into account when investigating the capacity of the two-hop FD relay channel.
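For reference, under ideal FD relaying (no residual self-interference) and real-valued AWGN links, the two-hop capacity reduces to the smaller of the two individual link capacities; in placeholder notation (our own, not necessarily that of Section~\ref{Sec2}), with source power $P_S$, relay power $P_R$, and noise variances $\sigma_R^2$ and $\sigma_D^2$ at the relay and the destination,
\[
C_{\rm FD}^{\rm ideal} = \min\left\{\frac{1}{2}\log_2\!\left(1+\frac{P_S}{\sigma_R^2}\right),\; \frac{1}{2}\log_2\!\left(1+\frac{P_R}{\sigma_D^2}\right)\right\},
\]
which serves as an upper benchmark for the capacity with residual self-interference studied in this paper.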
Despite the considerable body of work on FD relaying, see e.g. <|cite_start|> (Reference: Hybrid full-duplex/half-duplex relaying with transmit power adaptation: Focusing on two-antenna infrastructure relays employed for coverage extension, we develop hybrid techniques that switch opportunistically between full-duplex and half-duplex relaying modes. To rationalize the system design, the classic three-node full-duplex relay link is first amended by explicitly modeling residual relay self-interference, i.e., a loopback signal from the transmit antenna to the receive antenna remaining after cancellation. The motivation for opportunistic mode selection stems then from the fundamental trade-off determining the spectral efficiency: The half-duplex mode avoids inherently the self-interference at the cost of halving the end-to-end symbol rate while the full-duplex mode achieves full symbol rate but, in practice, suffers from residual interference even after cancellation. We propose the combination of opportunistic mode selection and transmit power adaptation for maximizing instantaneous and average spectral efficiency after noting that the trade-off favors alternately the modes during operation. The analysis covers both common relaying protocols (amplify-and-forward and decode-and-forward) as well as reflects the difference of downlink and uplink systems. The results show that opportunistic mode selection offers significant performance gain over system design that is confined to either mode without rationalization.) <|cite_end|> <|cite_start|> (Reference: Mitigation of loopback self-Interference in full-duplex {MIMO} relays: Full-duplex relaying is more spectrally efficient than half-duplex relaying as only one channel use is needed per two hops. However, it is crucial to minimize relay self-interference to render full duplex feasible. For this purpose, we analyze a broad range of multiple-input multiple-output (MIMO) mitigation schemes: natural isolation, time-domain cancellation, and spatial suppression. Cancellation subtracts replicated interference signal from the relay input while suppression reserves spatial dimensions for receive and transmit filtering. Spatial suppression can be achieved by antenna subset selection, null-space projection, i.e., receiving and transmitting in orthogonal subspaces, or joint transmit and receive beam selection to support more spatial streams by choosing the minimum eigenmodes for overlapping subspaces. In addition, minimum mean square error (MMSE) filtering can be employed to maintain the desired signal quality, which is inherent for cancellation, and the combination of time- and spatial-domain processing may be better than either alone. Targeting at minimal interference power, we solve optimal filters for each scheme in the cases of joint, separate and independent design. The performance of mitigation schemes is evaluated and compared by simulations. The results confirm that self-interference can be mitigated effectively also in the presence of imperfect side information.) <|cite_end|> <|cite_start|> (Reference: Full-Duplex MIMO Relaying: Achievable Rates under Limited Dynamic Range: In this paper we consider the problem of full-duplex multiple-input multiple-output (MIMO) relaying between multi-antenna source and destination nodes. 
The principal difficulty in implementing such a system is that, due to the limited attenuation between the relay's transmit and receive antenna arrays, the relay's outgoing signal may overwhelm its limited-dynamic-range input circuitry, making it difficult---if not impossible---to recover the desired incoming signal. While explicitly modeling transmitter/receiver dynamic-range limitations and channel estimation error, we derive tight upper and lower bounds on the end-to-end achievable rate of decode-and-forward-based full-duplex MIMO relay systems, and propose a transmission scheme based on maximization of the lower bound. The maximization requires us to (numerically) solve a nonconvex optimization problem, for which we detail a novel approach based on bisection search and gradient projection. To gain insights into system design tradeoffs, we also derive an analytic approximation to the achievable rate and numerically demonstrate its accuracy. We then study the behavior of the achievable rate as a function of signal-to-noise ratio, interference-to-noise ratio, transmitter/receiver dynamic range, number of antennas, and training length, using optimized half-duplex signaling as a baseline.) <|cite_end|> <|cite_start|> (Reference: An optimal full-duplex af relay for joint analog and digital domain self-interference cancellation: In this paper, a full-duplex (FD) amplify-and-forward (AF) relay is designed to compensate for the duplexing loss of the half-duplex (HD) AF relay. In particular, when there is no direct link between a source and a destination, joint analog domain self-interference suppression and digital domain residual self-interference cancellation is considered with an FD-AF relay having single receive antenna but multiple transmit antennas. Unlike previous approaches, a nonconvex quadratically constrained quadratic programming problem is formulated to find the optimal solution. The end-to-end spectral efficiency or, equivalently, the end-to-end signal-to-interference-plus-noise ratio from the source to the destination is chosen as the objective function to be maximized subject to the average transmit power constraint at the relay. In addition, an average power constraint is imposed on the output of the relay's receive antenna to avoid the nonlinear distortion in the low noise amplifier and the excessive quantization noise in the analog-to-digital converter. Through the systematic reduction and the partitioning of the constraint set, the optimal solution is derived in a closed algorithmic expression and shows how it allocates the transmission power not only in the direction of maximal performance improvement but also in the orthogonal direction in order to balance the system performance and the amount of self interference. It is shown that the optimal FD-AF relay significantly outperforms the optimal HD-AF relay even with the hardware limitations in the RF chain of the relay's receiver being well taken into account.) <|cite_end|> <|cite_start|> (Reference: Achievable transmission rates and self-interference channel estimation in hybrid full-duplex/half-duplex mimo relaying: This paper investigates the achievable throughput of a multi-antenna two-hop relay link under hybrid full/half-duplex operation. The analysis is facilitated by realistic waveform simulations, which explicitly model all the essential circuit impairments occurring in the relay transceiver together with degrading channel estimation and self-interference cancellation. 
The obtained results indicate that pure full-duplex operation does not ensure optimal performance but additional half-duplex transmission periods are usually needed to maximize the end-to-end throughput. Especially, it is shown that the estimation of the self-interference channel within the relay should be performed when the source is not transmitting anything while also the source should be allowed to transmit alone to avoid making the first hop a bottleneck. These findings form a solid basis for optimizing the full-duplex MIMO relay deployments in future mobile networks.) <|cite_end|>, the capacity of the two-hop FD relay channel with residual self-interference has not been explicitly characterized yet. As a result, for this channel, only achievable rates are known which are strictly smaller than the capacity.
Therefore, in this paper, we study the capacity of the two-hop FD relay channel with residual self-interference for the case when the source-relay and relay-destination links are additive white Gaussian noise (AWGN) channels.
In general, the statistics of the residual self-interference depend on the employed hardware configuration and the adopted self-interference suppression schemes. As a result, different hardware configurations and different self-interference suppression schemes may lead to different statistical properties of the residual self-interference, and thereby, to different capacities for the considered relay channel. An upper bound on the capacity of the two-hop FD relay channel with residual self-interference is given in <|cite_start|> (Reference: Capacity theorems for the relay channel: A relay channel consists of an input x_{l} , a relay output y_{1} , a channel output y , and a relay sender x_{2} (whose transmission is allowed to depend on the past symbols y_{1} . The dependence of the received symbols upon the inputs is given by p(y,y_{1}|x_{1},x_{2}) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_{1} , then C \: = \: \max \!_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y), I(X_{1}; Y_{1}|X_{2})} . 2)If y_{1} is a degraded form of y , then C \: = \: \max \!_{p(x_{1})} \max_{x_{2}} I(X_{1};Y|x_{2}) . 3)If p(y,y_{1}|x_{1},x_{2}) is an arbitrary relay channel with feedback from (y,y_{1}) to both x_{1} \and x_{2} , then C\: = \: \max_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y),I \,(X_{1};Y,Y_{1}|X_{2})} . 4)For a general relay channel, C \: \leq \: \max_{p(x_{1},x_{2})} \min \,{I \,(X_{1}, X_{2};Y),I(X_{1};Y,Y_{1}|X_{2}) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.) <|cite_end|> and is obtained by assuming zero residual self-interference. Hence, the objective of this paper is to derive
a lower bound on the capacity of this channel valid for any linear residual self-interference model. To this end, we consider the worst-case linear self-interference model with respect to the capacity, and thereby, we obtain the desired lower bound on the capacity for any other type of linear residual self-interference.
For the worst case, the linear residual self-interference is modeled as a conditionally Gaussian distributed random variable (RV) whose variance depends on the amplitude of the symbol transmitted by the relay.
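In other words, with placeholder symbols that may differ from the formal system model in Section~\ref{Sec2}, the received signals can be sketched as
\[
Y_R = h_{SR} X_S + I + N_R, \qquad Y_D = h_{RD} X_R + N_D, \qquad I \,|\, X_R \sim \mathcal{N}\big(0,\, \gamma(|X_R|)\big),
\]
where $X_S$ and $X_R$ are the source and relay transmit symbols, $N_R$ and $N_D$ are the AWGN terms, and $\gamma(\cdot)$ models how the residual self-interference power grows with the relay's transmit amplitude (e.g., with the instantaneous transmit power $|X_R|^2$); the exact dependence is part of the system model in Section~\ref{Sec2}.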
For this relay channel, we
derive the corresponding capacity and propose an explicit coding scheme which achieves the capacity. We show that the FD relay has to operate in the decode-and-forward (DF) mode to achieve the capacity, i.e., it has to decode each codeword received from the source and then transmit the decoded information to the destination in the next time slot, while simultaneously receiving. Moreover, we show that the optimal input distribution at the relay is
discrete or Gaussian, where the latter case occurs only when the relay-destination link is the
bottleneck link. On the other hand, the capacity-achieving input distribution at the source is Gaussian and its variance
depends on the amplitude of the symbol transmitted by the relay, i.e., the average power of the source's transmit symbol depends on the amplitude of the relay's transmit symbol. In particular, the smaller the amplitude of
the relay's transmit symbol is, the higher the average power of the source's transmit symbol
should be since, in that case, the residual self-interference is small with high probability. On the other hand, if the amplitude of the relay's transmit symbol is very large and exceeds some threshold, the chance for very strong residual self-interference is high and the source should remain silent
and conserve its energy for other symbol intervals with weaker residual self-interference. We show that the derived capacity converges to the capacity of the two-hop ideal FD relay channel without self-interference <|cite_start|> (Reference: Capacity theorems for the relay channel: A relay channel consists of an input x_{l} , a relay output y_{1} , a channel output y , and a relay sender x_{2} (whose transmission is allowed to depend on the past symbols y_{1} . The dependence of the received symbols upon the inputs is given by p(y,y_{1}|x_{1},x_{2}) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_{1} , then C \: = \: \max \!_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y), I(X_{1}; Y_{1}|X_{2})} . 2)If y_{1} is a degraded form of y , then C \: = \: \max \!_{p(x_{1})} \max_{x_{2}} I(X_{1};Y|x_{2}) . 3)If p(y,y_{1}|x_{1},x_{2}) is an arbitrary relay channel with feedback from (y,y_{1}) to both x_{1} \and x_{2} , then C\: = \: \max_{p(x_{1},x_{2})} \min \,{I(X_{1},X_{2};Y),I \,(X_{1};Y,Y_{1}|X_{2})} . 4)For a general relay channel, C \: \leq \: \max_{p(x_{1},x_{2})} \min \,{I \,(X_{1}, X_{2};Y),I(X_{1};Y,Y_{1}|X_{2}) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established.) <|cite_end|> and to the capacity of the two-hop HD relay channel <|cite_start|> (Reference: On the Capacity of the Two-Hop Half-Duplex Relay Channel: Although extensively investigated, the capacity of the two-hop half-duplex (HD) relay channel is not fully understood. In particular, a capacity expression which can be easily evaluated is not available and an explicit coding scheme which achieves the capacity is not known either. In this paper, we derive a new expression for the capacity of the two-hop HD relay channel by simplifying previously derived converse expressions. Compared to previous results, the new capacity expression can be easily evaluated. Moreover, we propose an explicit coding scheme which achieves the capacity. To achieve the capacity, the relay does not only send information to the destination by transmitting information-carrying symbols but also with the zero symbols resulting from the relay's silence during reception. As examples, we compute the capacities of the two-hop HD relay channel for the cases when the source-relay and relay-destination links are both binary-symmetric channels (BSCs) and additive white Gaussian noise (AWGN) channels, respectively, and numerically compare the capacities with the rates achieved by conventional relaying where the relay receives and transmits in a codeword-by-codeword fashion and switches between reception and transmission in a strictly alternating manner. Our numerical results show that the capacities of the two-hop HD relay channel for BSC and AWGN links are significantly larger than the rates achieved with conventional relaying.) <|cite_end|> in the limiting cases when the residual self-interference is zero and infinite, respectively. Our numerical results reveal that significant performance gains are achieved with the proposed capacity-achieving coding scheme compared to the achievable rates of conventional HD relaying and/or conventional FD relaying.
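As a rough structural guide (the precise optimization problem and its constraints are stated in Section~\ref{Sec3}), the resulting capacity has the familiar decode-and-forward form
\[
C = \max_{p(x_R),\, p(x_S \mid x_R)} \; \min\big\{ I(X_S; Y_R \mid X_R),\; I(X_R; Y_D) \big\},
\]
where the outer maximization is over the relay input distribution and the conditional source input distribution, subject to the average power constraints; the discussion above describes the shape of the maximizers, namely a discrete or Gaussian relay input and a conditionally Gaussian source input whose power decreases with the relay's transmit amplitude and is zero above a threshold.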
This paper is organized as follows. In Section~\ref{Sec2}, we present the models for the channel and the residual self-interference. In Section~\ref{Sec3}, we present the capacity of the considered channel and propose an explicit capacity-achieving coding scheme. Numerical examples are provided in Section~\ref{Sec-Num}, and Section~\ref{con} concludes the paper. <|paper_end|> | [
"<|reference_start|> Throughput Analysis for Full-Duplex Wireless Networks with Imperfect Self-interference Cancellation: This paper investigates the throughput for wireless network with full-duplex radios using stochastic geometry. Full-duplex (FD) radios can exchange data simultaneously with each other. On the other hand, the downside of FD transmission is that it will inevitably cause extra interference to the network compared to half-duplex (HD) transmission. Moreover, the residual self-interference has negative effects on the network throughput. In this paper, we focus on a wireless network of nodes with both HD and FD capabilities and derive and optimize the throughput in such a network. Our analytical result shows that if the network is adapting an ALOHA protocol, the maximal throughput is achieved by scheduling all concurrently transmitting nodes to work in either FD mode or HD mode depending on one simple condition. Moreover, the effects of imperfect self-interference cancellation on the signal-to-interference ratio (SIR) loss and throughput are also analyzed based on our mathematical model. We rigorously quantify the impact of imperfect self-interference cancellation on the throughput gain, transmission range, and other metrics, and we establish the minimum amount of self-interference suppression needed for FD to be beneficial. <|reference_end|>",
"<|reference_start|> On the Capacity of the Two-Hop Half-Duplex Relay Channel: Although extensively investigated, the capacity of the two-hop half-duplex (HD) relay channel is not fully understood. In particular, a capacity expression which can be easily evaluated is not available and an explicit coding scheme which achieves the capacity is not known either. In this paper, we derive a new expression for the capacity of the two-hop HD relay channel by simplifying previously derived converse expressions. Compared to previous results, the new capacity expression can be easily evaluated. Moreover, we propose an explicit coding scheme which achieves the capacity. To achieve the capacity, the relay does not only send information to the destination by transmitting information-carrying symbols but also with the zero symbols resulting from the relay's silence during reception. As examples, we compute the capacities of the two-hop HD relay channel for the cases when the source-relay and relay-destination links are both binary-symmetric channels (BSCs) and additive white Gaussian noise (AWGN) channels, respectively, and numerically compare the capacities with the rates achieved by conventional relaying where the relay receives and transmits in a codeword-by-codeword fashion and switches between reception and transmission in a strictly alternating manner. Our numerical results show that the capacities of the two-hop HD relay channel for BSC and AWGN links are significantly larger than the rates achieved with conventional relaying. <|reference_end|>",
"<|reference_start|> Capacity theorems for the relay channel: A relay channel consists of an input x_{l} , a relay output y_{1} , a channel output y , and a relay sender x_{2} (whose transmission is allowed to depend on the past symbols y_{1} . The dependence of the received symbols upon the inputs is given by p(y,y_{1}|x_{1},x_{2}) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_{1} , then C \\: = \\: \\max \\!_{p(x_{1},x_{2})} \\min \\,{I(X_{1},X_{2};Y), I(X_{1}; Y_{1}|X_{2})} . 2)If y_{1} is a degraded form of y , then C \\: = \\: \\max \\!_{p(x_{1})} \\max_{x_{2}} I(X_{1};Y|x_{2}) . 3)If p(y,y_{1}|x_{1},x_{2}) is an arbitrary relay channel with feedback from (y,y_{1}) to both x_{1} \\and x_{2} , then C\\: = \\: \\max_{p(x_{1},x_{2})} \\min \\,{I(X_{1},X_{2};Y),I \\,(X_{1};Y,Y_{1}|X_{2})} . 4)For a general relay channel, C \\: \\leq \\: \\max_{p(x_{1},x_{2})} \\min \\,{I \\,(X_{1}, X_{2};Y),I(X_{1};Y,Y_{1}|X_{2}) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established. <|reference_end|>",
"<|reference_start|> Capacity theorems for the relay channel: A relay channel consists of an input x_{l} , a relay output y_{1} , a channel output y , and a relay sender x_{2} (whose transmission is allowed to depend on the past symbols y_{1} . The dependence of the received symbols upon the inputs is given by p(y,y_{1}|x_{1},x_{2}) . The channel is assumed to be memoryless. In this paper the following capacity theorems are proved. 1)If y is a degraded form of y_{1} , then C \\: = \\: \\max \\!_{p(x_{1},x_{2})} \\min \\,{I(X_{1},X_{2};Y), I(X_{1}; Y_{1}|X_{2})} . 2)If y_{1} is a degraded form of y , then C \\: = \\: \\max \\!_{p(x_{1})} \\max_{x_{2}} I(X_{1};Y|x_{2}) . 3)If p(y,y_{1}|x_{1},x_{2}) is an arbitrary relay channel with feedback from (y,y_{1}) to both x_{1} \\and x_{2} , then C\\: = \\: \\max_{p(x_{1},x_{2})} \\min \\,{I(X_{1},X_{2};Y),I \\,(X_{1};Y,Y_{1}|X_{2})} . 4)For a general relay channel, C \\: \\leq \\: \\max_{p(x_{1},x_{2})} \\min \\,{I \\,(X_{1}, X_{2};Y),I(X_{1};Y,Y_{1}|X_{2}) . Superposition block Markov encoding is used to show achievability of C , and converses are established. The capacities of the Gaussian relay channel and certain discrete relay channels are evaluated. Finally, an achievable lower bound to the capacity of the general relay channel is established. <|reference_end|>"
] | [
1,
6,
15,
16
] | {"<|cite_1|>": "ss-1350666", "<|cite_3|>": "arxiv-73658", "<|cite_4|>": "ss-1589170", "<|cite_5|>": "ss-1432607", "<|cite_6|>": "arxiv-69018", "<|cite_7|>": "ss-1688093", "<|cite_8|>": "arxiv-69018", "<|cite_9|>": "arxiv-69018", "<|cite_10|>": "ss-1350666", "<|cite_11|>": "arxiv-52254", "<|multi_cite_12_1|>": "ss-2273424", "<|multi_cite_12_2|>": "ss-1018751", "<|multi_cite_12_3|>": "arxiv-26142", "<|multi_cite_12_4|>": "ss-1021519", "<|multi_cite_12_5|>": "ss-1021521", "<|cite_13|>": "ss-1350666", "<|cite_14|>": "ss-1350666", "<|cite_15|>": "arxiv-69018"} |
2303.02141-1 | have demonstrated promising quantitative improvement and new qualitative capabilities with increasing scale <|cite_start|> (Reference: When Do You Need Billions of Words of Pretraining Data?: NLP is currently dominated by general-purpose pretrained language models like RoBERTa, which achieve strong performance on NLU tasks through pretraining on billions of words. But what exact knowledge or skills do Transformer LMs learn from large-scale pretraining that they cannot learn from less data? We adopt four probing methods---classifier probing, information-theoretic probing, unsupervised relative acceptability judgment, and fine-tuning on NLU tasks---and draw learning curves that track the growth of these different measures of linguistic ability with respect to pretraining data volume using the MiniBERTas, a group of RoBERTa models pretrained on 1M, 10M, 100M and 1B words. We find that LMs require only about 10M or 100M words to learn representations that reliably encode most syntactic and semantic features we test. A much larger quantity of data is needed in order to acquire enough commonsense knowledge and other skills required to master typical downstream NLU tasks. The results suggest that, while the ability to encode linguistic features is almost certainly necessary for language understanding, it is likely that other forms of knowledge are the major drivers of recent improvements in language understanding among large pretrained models.) <|cite_end|>. Along with the scaling of model size and data size, the training resources of these foundation models also get outrageous. To accelerate training, we need to sparsify models before training. LTH unveils the possibility to find SNNs at initialization that can match their dense counterparts, even though it uses post-training pruning to find them. At the same time, sparse training <|cite_start|> (Reference: Scalable Training of Artificial Neural Networks with Adaptive Sparse Connectivity inspired by Network Science: Through the success of deep learning in various domains, artificial neural networks are currently among the most used artificial intelligence methods. Taking inspiration from the network properties of biological neural networks (e.g. sparsity, scale-freeness), we argue that (contrary to general practice) artificial neural networks, too, should not have fully-connected layers. Here we propose sparse evolutionary training of artificial neural networks, an algorithm which evolves an initial sparse topology (Erd\H{o}s-R\'enyi random graph) of two consecutive layers of neurons into a scale-free topology, during learning. Our method replaces artificial neural networks fully-connected layers with sparse ones before training, reducing quadratically the number of parameters, with no decrease in accuracy. We demonstrate our claims on restricted Boltzmann machines, multi-layer perceptrons, and convolutional neural networks for unsupervised and supervised learning on 15 datasets. Our approach has the potential to enable artificial neural networks to scale up beyond what is currently possible.) <|cite_end|> <|cite_start|> (Reference: Parameter Efficient Training of Deep Convolutional Neural Networks by Dynamic Sparse Reparameterization: Modern deep neural networks are typically highly overparameterized. Pruning techniques are able to remove a significant fraction of network parameters with little loss in accuracy. 
Recently, techniques based on dynamic reallocation of non-zero parameters have emerged, allowing direct training of sparse networks without having to pre-train a large dense model. Here we present a novel dynamic sparse reparameterization method that addresses the limitations of previous techniques such as high computational cost and the need for manual configuration of the number of free parameters allocated to each layer. We evaluate the performance of dynamic reallocation methods in training deep convolutional networks and show that our method outperforms previous static and dynamic reparameterization methods, yielding the best accuracy for a fixed parameter budget, on par with accuracies obtained by iteratively pruning a pre-trained dense model. We further investigated the mechanisms underlying the superior generalization performance of the resultant sparse networks. We found that neither the structure, nor the initialization of the non-zero parameters were sufficient to explain the superior performance. Rather, effective learning crucially depended on the continuous exploration of the sparse network structure space during training. Our work suggests that exploring structural degrees of freedom during training is more effective than adding extra parameters to the network.) <|cite_end|> <|cite_start|> (Reference: Sparse Networks from Scratch: Faster Training without Losing Performance: We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters suggesting that sparse momentum is robust and easy to use.) <|cite_end|> <|cite_start|> (Reference: Rigging the Lottery: Making All Tickets Winners: Many applications require sparse neural networks due to space or inference time restrictions. There is a large body of work on training dense networks to yield sparse networks for inference, but this limits the size of the largest trainable sparse model to that of the largest trainable dense model. In this paper we introduce a method to train sparse neural networks with a fixed parameter count and a fixed computational cost throughout training, without sacrificing accuracy relative to existing dense-to-sparse training methods. Our method updates the topology of the sparse network during training by using parameter magnitudes and infrequent gradient calculations. We show that this approach requires fewer floating-point operations (FLOPs) to achieve a given level of accuracy compared to prior techniques. 
We demonstrate state-of-the-art sparse training results on a variety of networks and datasets, including ResNet-50, MobileNets on Imagenet-2012, and RNNs on WikiText-103. Finally, we provide some insights into why allowing the topology to change during the optimization can overcome local minima encountered when the topology remains static. Code used in our work can be found in github.com/google-research/rigl.) <|cite_end|> <|cite_start|> (Reference: Powerpropagation: A sparsity inducing weight reparameterisation: The training of sparse neural networks is becoming an increasingly important tool for reducing the computational footprint of models at training and evaluation, as well enabling the effective scaling up of models. Whereas much work over the years has been dedicated to specialised pruning techniques, little attention has been paid to the inherent effect of gradient based training on model sparsity. In this work, we introduce Powerpropagation, a new weight-parameterisation for neural networks that leads to inherently sparse models. Exploiting the behaviour of gradient descent, our method gives rise to weight updates exhibiting a "rich get richer" dynamic, leaving low-magnitude parameters largely unaffected by learning. Models trained in this manner exhibit similar performance, but have a distribution with markedly higher density at zero, allowing more parameters to be pruned safely. Powerpropagation is general, intuitive, cheap and straight-forward to implement and can readily be combined with various other techniques. To highlight its versatility, we explore it in two very different settings: Firstly, following a recent line of work, we investigate its effect on sparse training for resource-constrained settings. Here, we combine Powerpropagation with a traditional weight-pruning technique as well as recent state-of-the-art sparse-to-sparse algorithms, showing superior performance on the ImageNet benchmark. Secondly, we advocate the use of sparsity in overcoming catastrophic forgetting, where compressed representations allow accommodating a large number of tasks at fixed model capacity. In all cases our reparameterisation considerably increases the efficacy of the off-the-shelf methods.) <|cite_end|>was proposed that can train a randomly-initialized sparse neural network from scratch while dynamically optimizing the sparse connectivity with promising performance. Instead of randomly initializing sparse networks, one iteration <|cite_start|> (Reference: Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. 
Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels.) <|cite_end|>or a few iterations <|cite_start|> (Reference: Pruning neural networks without any data by iteratively conserving synaptic flow: Pruning the parameters of deep neural networks has generated intense interest due to potential savings in time, memory and energy both during training and at test time. Recent works have identified, through an expensive sequence of training and pruning cycles, the existence of winning lottery tickets or sparse trainable subnetworks at initialization. This raises a foundational question: can we identify highly sparse trainable subnetworks at initialization, without ever training, or indeed without ever looking at the data? We provide an affirmative answer to this question through theory driven algorithm design. We first mathematically formulate and experimentally verify a conservation law that explains why existing gradient-based pruning algorithms at initialization suffer from layer-collapse, the premature pruning of an entire layer rendering a network untrainable. This theory also elucidates how layer-collapse can be entirely avoided, motivating a novel pruning algorithm Iterative Synaptic Flow Pruning (SynFlow). This algorithm can be interpreted as preserving the total flow of synaptic strengths through the network at initialization subject to a sparsity constraint. Notably, this algorithm makes no reference to the training data and consistently competes with or outperforms existing state-of-the-art pruning algorithms at initialization over a range of models (VGG and ResNet), datasets (CIFAR-10/100 and Tiny ImageNet), and sparsity constraints (up to 99.99 percent). Thus our data-agnostic pruning algorithm challenges the existing paradigm that, at initialization, data must be used to quantify which synapses are important.) <|cite_end|> <|cite_start|> (Reference: Progressive Skeletonization: Trimming more fat from a network at initialization: Recent studies have shown that skeletonization (pruning parameters) of networks \textit{at initialization} provides all the practical benefits of sparsity both at inference and training time, while only marginally degrading their performance. However, we observe that beyond a certain level of sparsity (approx $95\%$), these approaches fail to preserve the network performance, and to our surprise, in many cases perform even worse than trivial random pruning. To this end, we propose an objective to find a skeletonized network with maximum {\em foresight connection sensitivity} (FORCE) whereby the trainability, in terms of connection sensitivity, of a pruned network is taken into consideration. We then propose two approximate procedures to maximize our objective (1) Iterative SNIP: allows parameters that were unimportant at earlier stages of skeletonization to become important at later stages; and (2) FORCE: iterative process that allows exploration by allowing already pruned parameters to resurrect at later stages of skeletonization. Empirical analyses on a large suite of experiments show that our approach, while providing at least as good a performance as other recent approaches on moderate pruning levels, provides remarkably improved performance on higher pruning levels (could remove up to $99.5\%$ parameters while keeping the networks trainable). Code can be found in https://github.com/naver/force.) <|cite_end|>of training can be utilized to guide the search for sparse networks before training.
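As a concrete, deliberately simplified example of using a single training iteration to choose a sparse subnetwork before training, the sketch below computes a SNIP-style connection-sensitivity score $|w \cdot \partial L/\partial w|$ from one mini-batch and keeps the top-scoring fraction of weights. The model, batch, and density level are placeholders, and this is not a faithful reimplementation of any specific method cited above.

\begin{verbatim}
import torch
import torch.nn as nn

def prune_at_init(model: nn.Module, batch, loss_fn, density: float = 0.1):
    """Return {param_name: 0/1 mask} keeping the `density` fraction of
    weights with the largest |w * grad| scores from a single mini-batch."""
    inputs, targets = batch
    model.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    scores = {
        name: (p.detach() * p.grad).abs()
        for name, p in model.named_parameters()
        if p.grad is not None and p.dim() > 1   # prune weight matrices only
    }
    all_scores = torch.cat([s.flatten() for s in scores.values()])
    k = int(density * all_scores.numel())
    threshold = torch.topk(all_scores, k, largest=True).values.min()
    return {name: (s >= threshold).float() for name, s in scores.items()}

# Example usage with a throwaway two-layer network and random data.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
batch = (torch.randn(8, 32), torch.randint(0, 10, (8,)))
masks = prune_at_init(model, batch, nn.CrossEntropyLoss(), density=0.1)
print({k: f"{v.mean().item():.2f} kept" for k, v in masks.items()})
\end{verbatim}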
\vspace{-2mm}
\subsection{Benchmarking in Sparse Neural Networks}
\vspace{-2mm} <|cite_start|> (Reference: The State of Sparsity in Deep Neural Networks: We rigorously evaluate three state-of-the-art techniques for inducing sparsity in deep neural networks on two large-scale learning tasks: Transformer trained on WMT 2014 English-to-German, and ResNet-50 trained on ImageNet. Across thousands of experiments, we demonstrate that complex techniques (Molchanov et al., 2017; Louizos et al., 2017b) shown to yield high compression rates on smaller datasets perform inconsistently, and that simple magnitude pruning approaches achieve comparable or better results. Additionally, we replicate the experiments performed by (Frankle & Carbin, 2018) and (Liu et al., 2018) at scale and show that unstructured sparse architectures learned through pruning cannot be trained from scratch to the same test set performance as a model trained with joint sparsification and optimization. Together, these results highlight the need for large-scale benchmarks in the field of model compression. We open-source our code, top performing model checkpoints, and results of all hyperparameter configurations to establish rigorous baselines for future work on compression and sparsification.) <|cite_end|>rigorously evaluated variational dropout <|cite_start|> (Reference: Variational Dropout Sparsifies Deep Neural Networks: We explore a recently proposed Variational Dropout technique that provided an elegant Bayesian interpretation to Gaussian Dropout. We extend Variational Dropout to the case when dropout rates are unbounded, propose a way to reduce the variance of the gradient estimator and report first experimental results with individual dropout rates per weight. Interestingly, it leads to extremely sparse solutions both in fully-connected and convolutional layers. This effect is similar to automatic relevance determination effect in empirical Bayes but has a number of advantages. We reduce the number of parameters up to 280 times on LeNet architectures and up to 68 times on VGG-like networks with a negligible decrease of accuracy.) <|cite_end|>, $l_0$ regularizaion, and GMP <|cite_start|> (Reference: To prune, or not to prune: exploring the efficacy of pruning for model compression: Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are perhaps severely over-parameterized at the outset and a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprint. 
Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to 10x reduction in number of non-zero parameters with minimal loss in accuracy.) <|cite_end|>on two large-scale tasks. They demonstrated that the appealing performance advantages of variational dropout and $l_0$ regularization cannot generalize to large-scale tasks whereas simple magnitude pruning performs surprisingly well. <|cite_start|> (Reference: Rethinking the Value of Network Pruning: Network pruning is widely used for reducing the heavy inference cost of deep models in low-resource settings. A typical pruning algorithm is a three-stage pipeline, i.e., training (a large model), pruning and fine-tuning. During pruning, according to a certain criterion, redundant weights are pruned and important weights are kept to best preserve the accuracy. In this work, we make several surprising observations which contradict common beliefs. For all state-of-the-art structured pruning algorithms we examined, fine-tuning a pruned model only gives comparable or worse performance than training that model with randomly initialized weights. For pruning algorithms which assume a predefined target network architecture, one can get rid of the full pipeline and directly train the target network from scratch. Our observations are consistent for multiple network architectures, datasets, and tasks, which imply that: 1) training a large, over-parameterized model is often not necessary to obtain an efficient final model, 2) learned "important" weights of the large model are typically not useful for the small pruned model, 3) the pruned architecture itself, rather than a set of inherited "important" weights, is more crucial to the efficiency in the final model, which suggests that in some cases pruning can be useful as an architecture search paradigm. Our results suggest the need for more careful baseline evaluations in future research on structured pruning methods. We also compare with the "Lottery Ticket Hypothesis" (Frankle & Carbin 2019), and find that with optimal learning rate, the "winning ticket" initialization as used in Frankle & Carbin (2019) does not bring improvement over random initialization.) <|cite_end|>examined two pipelines: training from scratch and fine-tuning, concluding that fine-tuning a pruned model only gives comparable or worse performance than training from scratch. <|cite_start|> (Reference: What is the State of Neural Network Pruning?: Neural network pruning---the task of reducing the size of a network by removing parameters---has been the subject of a great deal of work in recent years. We provide a meta-analysis of the literature, including an overview of approaches to pruning and consistent findings in the literature. After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades. To address this situation, we identify issues with current practices, suggest concrete remedies, and introduce ShrinkBench, an open-source framework to facilitate standardized evaluations of pruning methods. 
We use ShrinkBench to compare various pruning techniques and show that its comprehensive evaluation can prevent common pitfalls when comparing pruning methods.) <|cite_end|>provided a comprehensive literature review on SNNs and found that pruning papers rarely make direct and controlled comparisons. <|cite_start|> (Reference: Pruning Neural Networks at Initialization: Why are We Missing the Mark?: Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), SynFlow (Tanaka et al., 2020), and magnitude pruning. Although these methods surpass the trivial baseline of random pruning, they remain below the accuracy of magnitude pruning after training, and we endeavor to understand why. We show that, unlike pruning after training, randomly shuffling the weights these methods prune within each layer or sampling new initial values preserves or improves accuracy. As such, the per-weight pruning decisions made by these methods can be replaced by a per-layer choice of the fraction of weights to prune. This property suggests broader challenges with the underlying pruning heuristics, the desire to prune at initialization, or both.) <|cite_end|>assessed the efficacy of various pruning-at-initialization approaches and attributed their inferior performance
to their insensitivity to weight shuffling and re-initialization. <|cite_start|> (Reference: The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training: Random pruning is arguably the most naive way to attain sparsity in neural networks, but has been deemed uncompetitive by either post-training pruning or sparse training. In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks. Without any delicate pruning criteria or carefully pursued sparsity structures, we empirically demonstrate that sparsely training a randomly pruned network from scratch can match the performance of its dense equivalent. There are two key factors that contribute to this revival: (i) the network sizes matter: as the original dense networks grow wider and deeper, the performance of training a randomly pruned sparse network will quickly grow to matching that of its dense equivalent, even at high sparsity ratios; (ii) appropriate layer-wise sparsity ratios can be pre-chosen for sparse training, which shows to be another important performance booster. Simple as it looks, a randomly pruned subnetwork of Wide ResNet-50 can be sparsely trained to outperforming a dense Wide ResNet-50, on ImageNet. We also observed such randomly pruned networks outperform dense counterparts in other favorable aspects, such as out-of-distribution detection, uncertainty estimation, and adversarial robustness. Overall, our results strongly suggest there is larger-than-expected room for sparse training at scale, and the benefits of sparsity might be more universal beyond carefully designed pruning. Our source code can be found at https://github.com/VITA-Group/Random_Pruning.) <|cite_end|>re-evaluated the performance of various random pruning before training and found that sparsely training a randomly pruned network from scratch can surprisingly match the performance of its dense equivalent. These papers shed light on the behavior of SNNs and discover important research problems for future work.
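For reference, the simple global magnitude-pruning baseline that several of the studies above found surprisingly strong can be sketched in a few lines. The model and the target sparsity are placeholders, and the snippet only builds pruning masks for an already trained (or even freshly initialized) network.

\begin{verbatim}
import torch
import torch.nn as nn

@torch.no_grad()
def global_magnitude_masks(model: nn.Module, sparsity: float = 0.9):
    """Zero out the `sparsity` fraction of weights with the smallest
    magnitude, pooled globally across all weight matrices."""
    weights = [p for p in model.parameters() if p.dim() > 1]
    all_mags = torch.cat([w.abs().flatten() for w in weights])
    k = int(sparsity * all_mags.numel())
    threshold = (torch.kthvalue(all_mags, k).values
                 if k > 0 else all_mags.min() - 1)
    return [(w.abs() > threshold).float() for w in weights]

model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
masks = global_magnitude_masks(model, sparsity=0.9)
print([f"{m.mean().item():.2f} kept" for m in masks])  # ~10% kept overall
\end{verbatim}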
\vspace{-0.5em} <|paper_end|> | [
"<|reference_start|> Sparse Networks from Scratch: Faster Training without Losing Performance: We demonstrate the possibility of what we call sparse learning: accelerated training of deep neural networks that maintain sparse weights throughout training while achieving dense performance levels. We accomplish this by developing sparse momentum, an algorithm which uses exponentially smoothed gradients (momentum) to identify layers and weights which reduce the error efficiently. Sparse momentum redistributes pruned weights across layers according to the mean momentum magnitude of each layer. Within a layer, sparse momentum grows weights according to the momentum magnitude of zero-valued weights. We demonstrate state-of-the-art sparse performance on MNIST, CIFAR-10, and ImageNet, decreasing the mean error by a relative 8%, 15%, and 6% compared to other sparse algorithms. Furthermore, we show that sparse momentum reliably reproduces dense performance levels while providing up to 5.61x faster training. In our analysis, ablations show that the benefits of momentum redistribution and growth increase with the depth and size of the network. Additionally, we find that sparse momentum is insensitive to the choice of its hyperparameters suggesting that sparse momentum is robust and easy to use. <|reference_end|>",
"<|reference_start|> Picking Winning Tickets Before Training by Preserving Gradient Flow: Overparameterization has been shown to benefit both the optimization and generalization of neural networks, but large networks are resource hungry at both training and test time. Network pruning can reduce test-time resource requirements, but is typically applied to trained networks and therefore cannot avoid the expensive training process. We aim to prune networks at initialization, thereby saving resources at training time as well. Specifically, we argue that efficient training requires preserving the gradient flow through the network. This leads to a simple but effective pruning criterion we term Gradient Signal Preservation (GraSP). We empirically investigate the effectiveness of the proposed method with extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet and ImageNet, using VGGNet and ResNet architectures. Our method can prune 80% of the weights of a VGG-16 network on ImageNet at initialization, with only a 1.6% drop in top-1 accuracy. Moreover, our method achieves significantly better performance than the baseline at extreme sparsity levels. <|reference_end|>",
"<|reference_start|> Pruning Neural Networks at Initialization: Why are We Missing the Mark?: Recent work has explored the possibility of pruning neural networks at initialization. We assess proposals for doing so: SNIP (Lee et al., 2019), GraSP (Wang et al., 2020), SynFlow (Tanaka et al., 2020), and magnitude pruning. Although these methods surpass the trivial baseline of random pruning, they remain below the accuracy of magnitude pruning after training, and we endeavor to understand why. We show that, unlike pruning after training, randomly shuffling the weights these methods prune within each layer or sampling new initial values preserves or improves accuracy. As such, the per-weight pruning decisions made by these methods can be replaced by a per-layer choice of the fraction of weights to prune. This property suggests broader challenges with the underlying pruning heuristics, the desire to prune at initialization, or both. <|reference_end|>",
"<|reference_start|> The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training: Random pruning is arguably the most naive way to attain sparsity in neural networks, but has been deemed uncompetitive by either post-training pruning or sparse training. In this paper, we focus on sparse training and highlight a perhaps counter-intuitive finding, that random pruning at initialization can be quite powerful for the sparse training of modern neural networks. Without any delicate pruning criteria or carefully pursued sparsity structures, we empirically demonstrate that sparsely training a randomly pruned network from scratch can match the performance of its dense equivalent. There are two key factors that contribute to this revival: (i) the network sizes matter: as the original dense networks grow wider and deeper, the performance of training a randomly pruned sparse network will quickly grow to matching that of its dense equivalent, even at high sparsity ratios; (ii) appropriate layer-wise sparsity ratios can be pre-chosen for sparse training, which shows to be another important performance booster. Simple as it looks, a randomly pruned subnetwork of Wide ResNet-50 can be sparsely trained to outperforming a dense Wide ResNet-50, on ImageNet. We also observed such randomly pruned networks outperform dense counterparts in other favorable aspects, such as out-of-distribution detection, uncertainty estimation, and adversarial robustness. Overall, our results strongly suggest there is larger-than-expected room for sparse training at scale, and the benefits of sparsity might be more universal beyond carefully designed pruning. Our source code can be found at https://github.com/VITA-Group/Random_Pruning. <|reference_end|>"
] | [
3,
6,
14,
15
] | {"<|cite_1|>": "ss-1658833", "<|multi_cite_2_1|>": "arxiv-151068", "<|multi_cite_2_2|>": "ss-832115", "<|multi_cite_2_3|>": "arxiv-280484", "<|multi_cite_2_4|>": "arxiv-250101", "<|multi_cite_3_1|>": "arxiv-136506", "<|multi_cite_3_2|>": "arxiv-192853", "<|multi_cite_3_3|>": "arxiv-349552", "<|multi_cite_4_1|>": "arxiv-129376", "<|multi_cite_4_3|>": "arxiv-397183", "<|multi_cite_5_1|>": "arxiv-177208", "<|multi_cite_5_2|>": "ss-1240855", "<|multi_cite_5_3|>": "arxiv-400366", "<|multi_cite_6_1|>": "arxiv-345988", "<|multi_cite_6_2|>": "arxiv-348986", "<|cite_7|>": "arxiv-351422", "<|multi_cite_8_1|>": "arxiv-192853", "<|multi_cite_8_2|>": "arxiv-151068", "<|cite_9|>": "ss-848674", "<|cite_10|>": "ss-779980", "<|cite_11|>": "ss-710402", "<|cite_12|>": "arxiv-155671", "<|multi_cite_13_1|>": "arxiv-417317", "<|multi_cite_13_2|>": "arxiv-404752", "<|multi_cite_13_3|>": "arxiv-346817", "<|multi_cite_14_1|>": "arxiv-185097", "<|multi_cite_14_2|>": "arxiv-452915", "<|cite_15|>": "arxiv-405464", "<|multi_cite_28_1|>": "ss-1565974", "<|multi_cite_28_2|>": "ss-1117443", "<|cite_29|>": "arxiv-84906", "<|multi_cite_30_1|>": "arxiv-110533", "<|multi_cite_30_2|>": "ss-832115", "<|multi_cite_30_3|>": "arxiv-374124", "<|multi_cite_31_1|>": "ss-1117443", "<|multi_cite_31_2|>": "ss-983590", "<|multi_cite_31_3|>": "arxiv-124717", "<|multi_cite_16_1|>": "ss-823196", "<|multi_cite_16_2|>": "arxiv-204244", "<|multi_cite_16_3|>": "arxiv-262397", "<|cite_32|>": "arxiv-252075", "<|multi_cite_17_1|>": "ss-832115", "<|multi_cite_17_2|>": "arxiv-280484", "<|multi_cite_17_3|>": "arxiv-380272", "<|multi_cite_17_4|>": "arxiv-405464", "<|multi_cite_17_5|>": "ss-1240456", "<|multi_cite_17_6|>": "arxiv-366068", "<|multi_cite_17_7|>": "arxiv-429561", "<|multi_cite_17_8|>": "arxiv-353604", "<|cite_18|>": "ss-1208490", "<|multi_cite_33_1|>": "arxiv-136506", "<|multi_cite_33_2|>": "arxiv-192853", "<|multi_cite_33_3|>": "arxiv-271456", "<|multi_cite_33_4|>": "arxiv-349552", "<|multi_cite_19_2|>": "arxiv-159617", "<|multi_cite_19_3|>": "arxiv-238549", "<|multi_cite_20_1|>": "ss-832115", "<|multi_cite_20_2|>": "arxiv-411079", "<|multi_cite_20_3|>": "arxiv-412781", "<|cite_21|>": "arxiv-302595", "<|multi_cite_22_1|>": "arxiv-129376", "<|multi_cite_22_2|>": "arxiv-191698", "<|multi_cite_22_3|>": "arxiv-213910", "<|multi_cite_22_4|>": "arxiv-236198", "<|multi_cite_22_6|>": "arxiv-370803", "<|multi_cite_23_2|>": "arxiv-248834", "<|multi_cite_24_1|>": "arxiv-270655", "<|multi_cite_24_2|>": "arxiv-272261", "<|cite_34|>": "arxiv-192853", "<|cite_25|>": "arxiv-114686", "<|cite_27|>": "arxiv-136506", "<|cite_35|>": "arxiv-175999", "<|cite_36|>": "arxiv-252302", "<|cite_37|>": "arxiv-290585", "<|cite_38|>": "arxiv-397183"} |
2404.12639 | <|paper_start|> Title: Single-Task Continual Offline Reinforcement Learning
Abstract: Single-Task Continual Offline Reinforcement Learning: In this paper, we study the continual learning problem of single-task offline reinforcement learning. In the past, continual reinforcement learning usually dealt only with multitasking, that is, learning multiple related or unrelated tasks in a row; once a task had been learned, it was not relearned, but only reused in subsequent processes. Offline reinforcement learning, however, requires continuously learning multiple different datasets for the same task. Existing algorithms strive to achieve the best results on each offline dataset as it is learned, so the skills acquired from earlier high-quality datasets are overwritten after the network learns from subsequent poor datasets. On the other hand, if too much emphasis is placed on stability, the network fails to learn from a better dataset that arrives after a poor one, and the problem of insufficient plasticity arises. How to design a strategy that always preserves the best performance for each state across all the data learned so far is a new challenge and the focus of this study. Therefore, this study proposes a new algorithm, called Ensemble Offline Reinforcement Learning Based on Experience Replay, which introduces multiple value networks that learn the same dataset and uses the dispersion of their estimates to judge whether a policy has already been learned, thereby improving the performance of the network in single-task offline reinforcement learning.
Introduction
Existing approaches to continual reinforcement learning (RL) typically study agents that continuously learn multiple tasks and aim to achieve the best possible performance on each task, which could be named "multi-task continual learning". Specifically, it can be divided into three scenarios: task-incremental learning, domain-incremental learning, and class-incremental learning <|cite_start|> (Reference: Three scenarios for continual learning: Standard artificial neural networks suffer from the well-known issue of catastrophic forgetting, making continual or lifelong learning difficult for machine learning. In recent years, numerous methods have been proposed for continual learning, but due to differences in evaluation protocols it is difficult to directly compare their performance. To enable more structured comparisons, we describe three continual learning scenarios based on whether at test time task identity is provided and--in case it is not--whether it must be inferred. Any sequence of well-defined tasks can be performed according to each scenario. Using the split and permuted MNIST task protocols, for each scenario we carry out an extensive comparison of recently proposed continual learning methods. We demonstrate substantial differences between the three scenarios in terms of difficulty and in terms of how efficient different methods are. In particular, when task identity must be inferred (i.e., class incremental learning), we find that regularization-based approaches (e.g., elastic weight consolidation) fail and that replaying representations of previous experiences seems required for solving this scenario.) <|cite_end|>. In all of these scenarios, whether or not the continual learning method knows the borders between tasks, multiple tasks are learned sequentially. Defining these scenarios is meaningful in tasks such as image classification or semantic segmentation because, in these tasks, all the data are sampled independently, so distribution shifts only occur when the task changes. However, defining tasks is difficult in reinforcement learning. Although tasks can be defined in various ways, in real-world settings the states of the environment, the tasks, and the situations of the robots are often not incidental, mutable, or clearly distinguishable, but rather continuous, gradual, and indistinct. Thus, formulating continual RL as a multi-task problem encounters difficulties. Meanwhile, many previous works have shown that agents trained in more extensive, stochastic environments exhibit superior performance, learning speed, and robustness when transferred to the real world compared to algorithms focused on a single concrete task. If continual RL is defined as a multi-task continual learning problem, the agent cannot reach sufficient proficiency on each task, since the randomness and learning time available for each task are limited. Therefore, in this work, we introduce a new formulation: single-task continual learning. As illustrated in Fig.~\ref{fig:STCORL_single_task}, in single-task continual learning the agent continuously learns different subspaces of the states and actions of the same task. It can adapt to gradual changes in the environment, the agent, and the task goals through the generalization and robustness of neural networks, and accommodate the latest environmental changes <|cite_start|> (Reference: Prediction and control.: ) <|cite_end|>.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{pictures/STCORL.pdf}
\caption{Diagram of single-task continual learning. The algorithm learns a sequence of datasets of a single task sequentially and is expected to perform best on the task as a whole, rather than on the individual datasets.}
\label{fig:STCORL_single_task}
\end{figure}
Specifically, the single-task formulation of continual RL is especially crucial in the offline setting, and this is the topic we focus on in this work. In this paradigm, the agent no longer learns by interacting directly with the environment but instead learns from offline datasets. In single-task offline RL with multiple datasets, each offline dataset covers a subspace of the task's states and actions.
These offline datasets may originate from data collected by other people or robots interacting with the environment through alternative means, such as many existing offline reinforcement learning datasets collected through human control, online reinforcement learning, or random wandering <|cite_start|> (Reference: Winning Solution of Real Robot Challenge III: This report introduces our winning solution of the real-robot phase of the Real Robot Challenge (RRC) 2022. The goal of this year’s challenge is to solve dexterous manipulation tasks with offline reinforcement learning (RL) or imitation learning. To this end, participants are provided with datasets containing dozens of hours of robotic data. For each task an expert 1 dataset and a mixed dataset are provided. In our experiments, when learning from the expert datasets, we find standard Behavioral Cloning (BC) outperforms state-of-the-art offline RL algorithms. When learning from the mixed datasets, BC performs poorly, as expected, while surprisingly offline RL performs suboptimally, failing to match the average performance of the baseline model used for collecting the datasets. To remedy this, motivated by the strong performance of BC on the expert datasets we elect to use a semi-supervised classification technique to filter the subset of expert data out from the mixed datasets, and subsequently perform BC on this extracted subset of data. To further improve results, in all settings we use a simple data augmentation method that exploits the geometric symmetry of the RRC physical robotic environment. Our submitted BC policies each surpass the mean return 2 of their respective raw datasets, and the policies trained on the filtered mixed datasets come close to matching the performances of those trained on the expert datasets.) <|cite_end|> <|cite_start|> (Reference: A Real-World Quadrupedal Locomotion Benchmark for Offline Reinforcement Learning: Online reinforcement learning (RL) methods are often data-inefficient or unreliable, making them difficult to train on real robotic hardware, especially quadruped robots. So learning robotic tasks from pre-collected data is a promising direction. Agile and stable legged locomotion remains an open issue in its general form. Analogous to the rapid progress of supervised learning in recent years, the combination of offline reinforcement learning (ORL) and realistic datasets has the potential to make breakthroughs in this challenging field. To facilitate the ORL research for real-world applications, we benchmark ten ORL algorithms in the realistic quadrupedal locomotion dataset. The dataset is collected by the classical model predictive control (MPC) method, rather than the online RL method commonly utilized by previous ORL benchmarks. Extensive experimental results show that the best-performing ORL algorithms can achieve competitive performance compared with the online RL, and even surpass it in some tasks. However, there is still a gap between the learning-based methods and classical MPC, especially in terms of stability and task response accuracy. Our benchmark can provide a fertile ground for future application-oriented ORL research.) <|cite_end|>. Additionally, considering that reward functions represented by task goals in RL can often be computed from states and actions without interacting with the environment, offline RL datasets can also be obtained by modifying the rewards in datasets collected for other tasks.
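To make the last point concrete, the sketch below shows one way a dataset collected for one task could be relabeled for another task whose reward is computable from states and actions alone. The transition format and the reward function are hypothetical stand-ins, not the datasets or tasks used later in this paper.

\begin{verbatim}
import numpy as np

def relabel_rewards(transitions, reward_fn):
    """Return a copy of an offline dataset with rewards recomputed
    for a new task.

    `transitions` is assumed to be a dict of parallel arrays with keys
    'observations', 'actions', 'next_observations', 'rewards', 'terminals'.
    """
    relabeled = dict(transitions)
    relabeled["rewards"] = np.array([
        reward_fn(s, a, s_next)
        for s, a, s_next in zip(transitions["observations"],
                                transitions["actions"],
                                transitions["next_observations"])
    ])
    return relabeled

# Hypothetical new task: reach the origin while using little control effort.
new_reward = lambda s, a, s_next: (-np.linalg.norm(s_next)
                                   - 0.01 * np.linalg.norm(a))

dataset = {
    "observations": np.random.randn(1000, 4),
    "actions": np.random.randn(1000, 2),
    "next_observations": np.random.randn(1000, 4),
    "rewards": np.zeros(1000),
    "terminals": np.zeros(1000, dtype=bool),
}
dataset_for_new_task = relabel_rewards(dataset, new_reward)
\end{verbatim}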
To make the best possible use of all offline datasets without evaluating their quality, we propose the single-task continual offline reinforcement learning (STCORL) problem. Specifically, for a continual offline RL task, the agent needs to continuously learn multiple datasets related to this task and achieve the best possible performance at each learning stage.
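A minimal skeleton of this problem setting might look as follows; `make_agent`, `offline_datasets`, and `evaluate_on_task` are hypothetical placeholders with assumed interfaces. The point is only that the agent sees the datasets strictly one after another and is always evaluated on the full task.

\begin{verbatim}
def run_stcorl(make_agent, offline_datasets, evaluate_on_task,
               updates_per_stage=100_000):
    """Sequentially train one agent on several offline datasets of the
    *same* task.

    After each stage the agent is scored on the whole task, so it must
    retain skills learned from earlier (possibly higher-quality) datasets.
    """
    agent = make_agent()
    scores = []
    for stage, dataset in enumerate(offline_datasets):
        for _ in range(updates_per_stage):
            batch = dataset.sample()   # assumed dataset API
            agent.update(batch)        # assumed agent API
        scores.append(evaluate_on_task(agent))
        print(f"stage {stage}: return on the full task = {scores[-1]:.1f}")
    return agent, scores
\end{verbatim}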
However, STCORL faces challenges that do not appear in traditional continual learning, and simply applying conventional continual learning algorithms to STCORL cannot adequately solve the problem. The root cause is the varying quality of the datasets.
As a single-task continual learning problem, STCORL needs to solve the same-input-different-output problem, or overwriting problem <|cite_start|> (Reference: Are the Z(3985) and Z(4000) the same state?: ) <|cite_end|>. Specifically, when the new dataset contains data with the same states as inputs as the old dataset, the network must determine the relationship between the actions in the new data and the outputs of the existing network. If the quality of the new data exceeds that of the network's current outputs, the network should prioritize plasticity to strengthen its learning of the new data. In contrast, if the network's current outputs are of superior quality, stability should take precedence to consolidate mastery of what was learned before. Additionally, selective learning is essential for single-task continual learning. Usually, each dataset contains both high-quality and low-quality parts, so algorithms should assess data at the level of individual samples rather than entire datasets. Moreover, appraising the value of the data itself, specifically the state values or Q-values in STCORL, also requires continual memory.
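One way to express this selective-learning idea in code is to gate the imitation of each new data point on whether it appears better than what the agent already does in that state. The advantage estimate and the gating rule below are illustrative assumptions, and `q_fn`, `v_fn`, and `policy.log_prob` are assumed interfaces rather than the mechanism proposed later in this paper.

\begin{verbatim}
import torch

def selective_bc_loss(policy, q_fn, v_fn, states, actions):
    """Behaviour-cloning loss applied only where the dataset action looks
    better than the agent's current behaviour in the same state
    (positive advantage)."""
    with torch.no_grad():
        advantage = q_fn(states, actions) - v_fn(states)
        keep = (advantage > 0).float()         # learn only from improving data
    log_prob = policy.log_prob(states, actions)  # assumed policy API
    return -(keep * log_prob).sum() / keep.sum().clamp(min=1.0)
\end{verbatim}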
In summary, single-task continual learning necessitates simultaneously implementing both learning and not learning. Whereas boosting model stability is a typical approach in multitask continual reinforcement learning, since inadequate plasticity on new tasks can usually be resolved through sustained learning <|cite_start|> (Reference: Gradient Episodic Memory for Continual Learning: One major obstacle towards AI is the poor ability of models to solve new problems quicker, and without forgetting previously acquired knowledge. To better understand this issue, we study the problem of continual learning, where the model observes, once and one by one, examples concerning a sequence of tasks. First, we propose a set of metrics to evaluate models learning over a continuum of data. These metrics characterize models not only by their test accuracy, but also in terms of their ability to transfer knowledge across tasks. Second, we propose a model for continual learning, called Gradient Episodic Memory (GEM) that alleviates forgetting, while allowing beneficial transfer of knowledge to previous tasks. Our experiments on variants of the MNIST and CIFAR-100 datasets demonstrate the strong performance of GEM when compared to the state-of-the-art.) <|cite_end|>, balancing both new and old tasks is indispensable in single-task continual learning. Therefore, single-task continual learning poses greater challenges than multitask continual reinforcement learning.
In STCORL specifically, the prevailing and otherwise effective offline reinforcement learning algorithms that adopt behavior cloning (BC) and conservative learning can introduce further problems. To address the core overestimation problem of offline RL, these algorithms suppress and constrain the learned policy so that it closely approximates the behavior policy across all states. This strategy is originally intended to prevent the algorithm from learning too many out-of-distribution (OOD) actions, whose risks come from exceeding the data distribution. However, in STCORL, such suppression and constraints cause the network to always conform to the most recently learned offline dataset and to forget knowledge learned from previously seen datasets. Skills learned from higher-quality datasets are suppressed as overestimated OOD actions when subsequent datasets are learned. We term the problem of abandoning skills acquired from old datasets while learning new datasets as active forgetting.
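For instance, a TD3+BC-style objective, used here purely to illustrate the kind of constraint meant above and not as a method from this paper, always pulls the policy toward the actions in the batch currently being trained on, regardless of whether those actions are worse than what the policy learned from earlier datasets. The scale `alpha` and the interfaces of `policy` and `q_fn` are illustrative assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

def bc_constrained_policy_loss(policy, q_fn, states, dataset_actions,
                               alpha=2.5):
    """Maximize Q while staying close to the *current* batch's actions.

    Because the BC term tracks whichever dataset is being learned now,
    skills acquired from earlier, better datasets can be actively
    overwritten."""
    policy_actions = policy(states)
    q = q_fn(states, policy_actions)
    lam = alpha / q.abs().mean().detach()   # TD3+BC-style scaling
    bc = F.mse_loss(policy_actions, dataset_actions)
    return -(lam * q).mean() + bc
\end{verbatim}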
To solve the active forgetting problem, this chapter proposes a new offline reinforcement learning algorithm based on <|cite_start|> (Reference: Offline Reinforcement Learning: We study a novel setting in offline reinforcement learning (RL) where a number of distributed machines jointly cooperate to solve the problem but only one single round of communication is allowed and there is a budget constraint on the total number of information (in terms of bits) that each machine can send out. For value function prediction in contextual bandits, and both episodic and non-episodic MDPs, we establish information-theoretic lower bounds on the minimax risk for distributed statistical estimators; this reveals the minimum amount of communication required by any offline RL algorithms. Specifically, for contextual bandits, we show that the number of bits must scale at least as Ω ( 𝐴𝐶 ) to match the centralised minimax optimal rate, where 𝐴 is the number of actions and 𝐶 is the context dimension; meanwhile, we reach similar results in the MDP settings. Furthermore, we develop learning algorithms based on least-squares estimates and Monte-Carlo return estimates and provide a sharp analysis showing that they can achieve optimal risk up to logarithmic factors. Additionally, we also show that temporal difference is unable to efficiently utilise information from all available devices under the single-round communication setting due to the initial bias of this method. To our best knowledge, this paper presents the first minimax lower bounds for distributed offline RL problems.) <|cite_end|> and <|cite_start|> (Reference: Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble: Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that the clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.) <|cite_end|>, called experience-replay-based ensemble implicit Q-learning (EREIQL), making it more amenable to single-task offline reinforcement learning. Specifically, EREIQL introduces an ensemble value function. 
By initializing multiple value functions, EREIQL assigns a sufficiently low value to each state, initializing them. Although EREIQL does not adopt any conservative strategies leading to active forgetting, unseen states will maintain the initialized low values during learning, while learned states will sustain the highest learned values through expectile regression from <|cite_start|> (Reference: Offline Reinforcement Learning: We study a novel setting in offline reinforcement learning (RL) where a number of distributed machines jointly cooperate to solve the problem but only one single round of communication is allowed and there is a budget constraint on the total number of information (in terms of bits) that each machine can send out. For value function prediction in contextual bandits, and both episodic and non-episodic MDPs, we establish information-theoretic lower bounds on the minimax risk for distributed statistical estimators; this reveals the minimum amount of communication required by any offline RL algorithms. Specifically, for contextual bandits, we show that the number of bits must scale at least as Ω ( 𝐴𝐶 ) to match the centralised minimax optimal rate, where 𝐴 is the number of actions and 𝐶 is the context dimension; meanwhile, we reach similar results in the MDP settings. Furthermore, we develop learning algorithms based on least-squares estimates and Monte-Carlo return estimates and provide a sharp analysis showing that they can achieve optimal risk up to logarithmic factors. Additionally, we also show that temporal difference is unable to efficiently utilise information from all available devices under the single-round communication setting due to the initial bias of this method. To our best knowledge, this paper presents the first minimax lower bounds for distributed offline RL problems.) <|cite_end|>. Meanwhile, the policy in EREIQL adopts advantage weighted regression, avoiding learning inferior actions with lower Q-values to prevent performance drops. Finally, EREIQL incorporates experience replay to mitigate catastrophic forgetting stemming from continual learning itself.
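The following sketch shows one plausible way the pieces described above could fit together: an ensemble of independently initialized value networks aggregated pessimistically (here by a minimum, which is an assumption rather than necessarily this paper's choice), an expectile-regression value loss, and an advantage-weighted regression policy loss. Network sizes, the expectile, and the temperature are illustrative defaults, and `q_target` and `policy.log_prob` are assumed interfaces.

\begin{verbatim}
import torch
import torch.nn as nn

def expectile_loss(diff, tau=0.9):
    """Asymmetric L2: positive errors weighted by tau, negative by 1 - tau."""
    weight = torch.abs(tau - (diff < 0).float())
    return (weight * diff.pow(2)).mean()

class EnsembleValue(nn.Module):
    """Several independently initialized V-networks; the minimum over heads
    keeps states never seen in any dataset at a pessimistically low value."""
    def __init__(self, obs_dim, n_heads=5, hidden=256):
        super().__init__()
        self.heads = nn.ModuleList([
            nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, 1))
            for _ in range(n_heads)
        ])

    def forward(self, obs):
        vs = torch.stack([h(obs).squeeze(-1) for h in self.heads], dim=0)
        return vs.min(dim=0).values       # pessimistic aggregate

def value_update_loss(value_ens, q_target, obs, actions, tau=0.9):
    # Expectile regression toward Q retains the highest values already learned.
    diff = q_target(obs, actions).detach() - value_ens(obs)
    return expectile_loss(diff, tau)

def awr_policy_loss(policy, value_ens, q_target, obs, actions, beta=3.0):
    # Advantage-weighted regression: imitate dataset actions in proportion
    # to how much better they are than the current value estimate.
    with torch.no_grad():
        adv = q_target(obs, actions) - value_ens(obs)
        weights = torch.exp(beta * adv).clamp(max=100.0)
    return -(weights * policy.log_prob(obs, actions)).mean()
\end{verbatim}

In such a sketch, the expectile loss never pushes the value of a learned state back down toward a worse later dataset, while the experience replay buffer counteracts ordinary catastrophic forgetting.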
The main contributions are: 1) we formulate a new single-task continual offline reinforcement learning (STCORL) problem and show that existing offline reinforcement learning algorithms suffer from active forgetting on it; 2) we propose the EREIQL algorithm, which avoids active forgetting through a passively conservative design; 3) we evaluate the prevailing continual learning algorithms on STCORL and identify experience replay as the most effective of them; 4) experiments on various datasets show that the proposed EREIQL achieves superior performance across different STCORL tasks.
Related Work
\subsection{Continual Learning}
Continual learning aims to use a single network to learn multiple tasks sequentially, consuming an acceptable amount of resources while maintaining strong performance across all tasks. Existing algorithms can be broadly categorized into three classes: rehearsal-based, regularization-based, and dynamic-architecture-based.
Regularization-based continual learning methods mitigate forgetting by constraining the change rates of essential parameters to be minor while allowing insignificant parameters to vary greatly when learning new tasks. Examples include <|cite_start|> (Reference: Overcoming catastrophic forgetting in neural networks: The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Neural networks are not, in general, capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks which they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on the MNIST hand written digit dataset and by learning several Atari 2600 games sequentially.) <|cite_end|> <|cite_start|> (Reference: Continual Learning Through Synaptic Intelligence: While deep learning has led to remarkable advances across diverse applications, it struggles in domains where the data distribution changes over the course of learning. In stark contrast, biological neural networks continually adapt to changing domains, possibly by leveraging complex molecular machinery to solve many tasks simultaneously. In this study, we introduce intelligent synapses that bring some of this biological complexity into artificial neural networks. Each synapse accumulates task relevant information over time, and exploits this information to rapidly store new memories without forgetting old ones. We evaluate our approach on continual learning of classification tasks, and show that it dramatically reduces forgetting while maintaining computational efficiency.) <|cite_end|> <|cite_start|> (Reference: Riemannian Walk for Incremental Learning: Understanding Forgetting and Intransigence: ) <|cite_end|>. Dynamic architecture-based methods reserve separate parameters for each task, such as <|cite_start|> (Reference: Forget-free Continual Learning with Winning Subnetworks: Inspired by Lottery Ticket Hypothesis that competitive subnetworks exist within a dense network, we propose a continual learning method referred to as Winning SubNetworks (WSN) which sequentially learns and selects an optimal sub-network for each task. Specifically, WSN jointly learns the model weights and task-adaptive binary masks pertaining to subnetworks associated with each task whilst attempting to select a small set of weights to be activated (winning ticket) by reusing weights of the prior subnet-works. The proposed method is inherently immune to catastrophic forgetting as each selected subnetwork model does not infringe upon other subnetworks. Binary masks spawned per winning ticket are encoded into one N-bit binary digit mask, then compressed using Huffman coding for a sub-linear increase in network capacity with respect to the number of tasks. Code is available at https://github.com/ihaeyong/WSN .) <|cite_end|> <|cite_start|> (Reference: {{CLR: 本文从市场细分方法的角度,打破了传统的根据样本在心理感知或偏好等多个变量的距离进行细分的思路,采用一种新的聚类回归分析方法基于变量间的因果关系对顾客进行细分,不仅可以把顾客有效地划分成具有不同特点的群体,而且可以根据不同要素的因果关系确定不同群体中的关键影响要素。本文以电热水器行业顾客满意度研究为例,根据不同要素对满意度影响的差异进行市场细分。分析表明,采用这种方法,企业可以在经营管理过程中通过差异化策略获得竞争优势。) <|cite_end|>. As our work centers on rehearsal-based methods, the other two categories are not recounted here.
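As a generic illustration of this idea (not an implementation of any particular method cited above), the following sketch penalizes changes to parameters that a squared-gradient importance estimate marks as essential for old tasks; the function names and the weighting constant are assumptions.

```python
import torch


def estimate_importance(model, loss_fn, old_task_batches):
    """Fisher-style importance: average squared gradient of the old-task loss."""
    importance = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in old_task_batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                importance[n] += p.grad.detach() ** 2
    return {n: v / max(len(old_task_batches), 1) for n, v in importance.items()}


def regularized_loss(model, batch, loss_fn, importance, old_params, lam=1.0):
    """New-task loss plus a quadratic penalty on moving important parameters."""
    penalty = sum(
        (importance[n] * (p - old_params[n]) ** 2).sum()
        for n, p in model.named_parameters()
    )
    return loss_fn(model, batch) + lam * penalty
```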
Rehearsal-based continual learning maintains performance on previous tasks by retaining some data from them in a replay buffer and using it when learning new tasks. The first critical issue here is constructing the replay buffer. Related research includes randomly storing <|cite_start|> (Reference: Rainbow Memory: Continual Learning with a Memory of Diverse Samples: Continual learning is a realistic learning scenario for AI models. Prevalent scenario of continual learning, however, assumes disjoint sets of classes as tasks and is less realistic rather artificial. Instead, we focus on ‘blurry’ task boundary; where tasks shares classes and is more realistic and practical. To address such task, we argue the importance of diversity of samples in an episodic memory. To enhance the sample diversity in the memory, we propose a novel memory management strategy based on per-sample classification uncertainty and data augmentation, named Rainbow Memory (RM). With extensive empirical validations on MNIST, CIFAR10, CIFAR100, and ImageNet datasets, we show that the proposed method significantly improves the accuracy in blurry continual learning setups, outperforming state of the arts by large margins despite its simplicity. Code and data splits will be available in https://github.com/clovaai/rainbow-memory.) <|cite_end|> data from different tasks and selective storing <|cite_start|> (Reference: Online Coreset Selection for Rehearsal-based Continual Learning: A dataset is a shred of crucial evidence to describe a task. However, each data point in the dataset does not have the same potential, as some of the data points can be more representative or informative than others. This unequal importance among the data points may have a large impact in rehearsal-based continual learning, where we store a subset of the training examples (coreset) to be replayed later to alleviate catastrophic forgetting. In continual learning, the quality of the samples stored in the coreset directly affects the model's effectiveness and efficiency. The coreset selection problem becomes even more important under realistic settings, such as imbalanced continual learning or noisy data scenarios. To tackle this problem, we propose Online Coreset Selection (OCS), a simple yet effective method that selects the most representative and informative coreset at each iteration and trains them in an online manner. Our proposed method maximizes the model's adaptation to a current dataset while selecting high-affinity samples to past tasks, which directly inhibits catastrophic forgetting. We validate the effectiveness of our coreset selection mechanism over various standard, imbalanced, and noisy datasets against strong continual learning baselines, demonstrating that it improves task adaptation and prevents catastrophic forgetting in a sample-efficient manner.) <|cite_end|> <|cite_start|> (Reference: Selective Replay Enhances Learning in Online Continual Analogical Reasoning: In continual learning, a system learns from non-stationary data streams or batches without catastrophic forgetting. While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neural networks designed for abstract reasoning has not yet been studied. Here, we study continual learning of analogical reasoning. 
Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are commonly used to measure non-verbal abstract reasoning in humans, and recently offline neural networks for the RPM problem have been proposed. In this paper, we establish experimental baselines, protocols, and forward and backward transfer metrics to evaluate continual learners on RPMs. We employ experience replay to mitigate catastrophic forgetting. Prior work using replay for image classification tasks has found that selectively choosing the samples to replay offers little, if any, benefit over random selection. In contrast, we find that selective replay can significantly outperform random selection for the RPM task1.) <|cite_end|> based on characteristics like value, uniqueness, and representativeness. Another focal research direction is leveraging selected data, most commonly by blending it with new task data into fresh batches for learning <|cite_start|> (Reference: Prioritized Experience Replay: Experience replay lets online reinforcement learning agents remember and reuse experiences from the past. In prior work, experience transitions were uniformly sampled from a replay memory. However, this approach simply replays transitions at the same frequency that they were originally experienced, regardless of their significance. In this paper we develop a framework for prioritizing experience, so as to replay important transitions more frequently, and therefore learn more efficiently. We use prioritized experience replay in Deep Q-Networks (DQN), a reinforcement learning algorithm that achieved human-level performance across many Atari games. DQN with prioritized experience replay achieves a new state-of-the-art, outperforming DQN with uniform replay on 41 out of 49 games.) <|cite_end|> or distilling knowledge from old data and retained old networks <|cite_start|> (Reference: Always Be Dreaming: A New Approach for Data-Free Class-Incremental Learning: Modern computer vision applications suffer from catastrophic forgetting when incrementally learning new concepts over time. The most successful approaches to alleviate this forgetting require extensive replay of previously seen data, which is problematic when memory constraints or data legality concerns exist. In this work, we consider the high-impact problem of Data-Free Class-Incremental Learning (DFCIL), where an incremental learning agent must learn new concepts over time without storing generators or training data from past tasks. One approach for DFCIL is to replay synthetic images produced by inverting a frozen copy of the learner’s classification model, but we show this approach fails for common class-incremental benchmarks when using standard distillation strategies. We diagnose the cause of this failure and propose a novel incremental distillation strategy for DFCIL, contributing a modified cross-entropy training and importance-weighted feature distillation, and show that our method results in up to a 25.1% increase in final task accuracy (absolute difference) compared to SOTA DFCIL methods for common class-incremental benchmarks. Our method even outperforms several standard replay based methods which store a coreset of images. Our code is available at https://github.com/GT-RIPL/AlwaysBeDreaming-DFCIL) <|cite_end|>. 
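The sketch below illustrates both steps under simple assumptions: a fixed-capacity buffer maintained by reservoir sampling (one possible storage rule, not the selective criteria of the works above) and a training batch that mixes replayed samples with new-task data.

```python
import random


class ReplayBuffer:
    """Fixed-capacity buffer; reservoir sampling keeps a uniform subset of the stream."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = []
        self.seen = 0

    def add(self, sample):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(sample)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = sample

    def sample(self, k: int):
        return random.sample(self.data, min(k, len(self.data)))


def mixed_batch(buffer: ReplayBuffer, new_task_batch, replay_ratio: float = 0.5):
    """Blend replayed old-task samples with the current task's batch."""
    n_replay = int(len(new_task_batch) * replay_ratio)
    return list(new_task_batch) + buffer.sample(n_replay)
```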
Another line of methods uses a generator to produce samples following the same distribution as the task data instead of directly storing a replay buffer <|cite_start|> (Reference: Learning to Predict Gradients for Semi-Supervised Continual Learning: A key challenge for machine intelligence is to learn new visual concepts without forgetting the previously acquired knowledge. Continual learning (CL) is aimed toward addressing this challenge. However, there still exists a gap between CL and human learning. In particular, humans are able to continually learn from the samples associated with known or unknown labels in their daily lives, whereas existing CL and semi-supervised CL (SSCL) methods assume that the training samples are associated with known labels. Specifically, we are interested in two questions: 1) how to utilize unrelated unlabeled data for the SSCL task and 2) how unlabeled data affect learning and catastrophic forgetting in the CL task. To explore these issues, we formulate a new SSCL method, which can be generically applied to existing CL models. Furthermore, we propose a novel gradient learner to learn from labeled data to predict gradients on unlabeled data. In this way, the unlabeled data can fit into the supervised CL framework. We extensively evaluate the proposed method on mainstream CL methods, adversarial CL (ACL), and semi-supervised learning (SSL) tasks. The proposed method achieves state-of-the-art performance on classification accuracy and backward transfer (BWT) in the CL setting while achieving the desired performance on classification accuracy in the SSL setting. This implies that the unlabeled images can enhance the generalizability of CL models on the predictive ability of unseen data and significantly alleviate catastrophic forgetting. The code is available at https://github.com/luoyan407/grad_prediction.git.) <|cite_end|> <|cite_start|> (Reference: Brain-inspired feature exaggeration in generative replay for continual learning: The catastrophic forgetting of previously learnt classes is one of the main obstacles to the successful development of a reliable and accurate generative continual learning model. When learning new classes, the internal representation of previously learnt ones can often be overwritten, resulting in the model's "memory" of earlier classes being lost over time. Recent developments in neuroscience have uncovered a method through which the brain avoids its own form of memory interference. Applying a targeted exaggeration of the differences between features of similar, yet competing memories, the brain can more easily distinguish and recall them. In this paper, the application of such exaggeration, via the repulsion of replayed samples belonging to competing classes, is explored. Through the development of a 'reconstruction repulsion' loss, this paper presents a new state-of-the-art performance on the classification of early classes in the class-incremental learning dataset CIFAR100.) <|cite_end|>. These approaches avoid occupying space with old task data but increase overall complexity.
\subsection{Continual Reinforcement Learning}
Compared to traditional continual learning, continual RL methods focus on several aspects:
First, how to select data for storage. Related work here includes <|cite_start|> (Reference: Selective Replay Enhances Learning in Online Continual Analogical Reasoning: In continual learning, a system learns from non-stationary data streams or batches without catastrophic forgetting. While this problem has been heavily studied in supervised image classification and reinforcement learning, continual learning in neural networks designed for abstract reasoning has not yet been studied. Here, we study continual learning of analogical reasoning. Analogical reasoning tests such as Raven's Progressive Matrices (RPMs) are commonly used to measure non-verbal abstract reasoning in humans, and recently offline neural networks for the RPM problem have been proposed. In this paper, we establish experimental baselines, protocols, and forward and backward transfer metrics to evaluate continual learners on RPMs. We employ experience replay to mitigate catastrophic forgetting. Prior work using replay for image classification tasks has found that selectively choosing the samples to replay offers little, if any, benefit over random selection. In contrast, we find that selective replay can significantly outperform random selection for the RPM task1.) <|cite_end|> <|cite_start|> (Reference: A Definition of Continual Reinforcement Learning: In a standard view of the reinforcement learning problem, an agent's goal is to efficiently identify a policy that maximizes long-term reward. However, this perspective is based on a restricted view of learning as finding a solution, rather than treating learning as endless adaptation. In contrast, continual reinforcement learning refers to the setting in which the best agents never stop learning. Despite the importance of continual reinforcement learning, the community lacks a simple definition of the problem that highlights its commitments and makes its primary concepts precise and clear. To this end, this paper is dedicated to carefully defining the continual reinforcement learning problem. We formalize the notion of agents that"never stop learning"through a new mathematical language for analyzing and cataloging agents. Using this new language, we define a continual learning agent as one that can be understood as carrying out an implicit search process indefinitely, and continual reinforcement learning as the setting in which the best agents are all continual learning agents. We provide two motivating examples, illustrating that traditional views of multi-task reinforcement learning and continual supervised learning are special cases of our definition. Collectively, these definitions and perspectives formalize many intuitive concepts at the heart of learning, and open new research pathways surrounding continual learning agents.) <|cite_end|>, seeking to retain the most critical data points, using strategies like choosing the highest-value experiences and averaging sampling across the state space where possible.
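A toy version of such a storage rule, assuming experience arrives as whole trajectories with known returns, might keep the highest-return trajectories and spend the rest of the budget on a uniform sample for state-space coverage; the 50/50 split and all names below are illustrative assumptions rather than choices taken from the cited works.

```python
import random


def select_for_storage(trajectories, returns, budget: int, keep_top: float = 0.5):
    """Keep the highest-return trajectories, plus a uniform sample for coverage."""
    n_top = int(budget * keep_top)
    order = sorted(range(len(trajectories)), key=lambda i: returns[i], reverse=True)
    chosen = set(order[:n_top])
    remaining = order[n_top:]
    chosen.update(random.sample(remaining, min(budget - n_top, len(remaining))))
    return [trajectories[i] for i in chosen]
```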
The second focal issue in continual reinforcement learning is integrating continual learning into reinforcement learning algorithms. Major work here includes <|cite_start|> (Reference: Continual World: A Robotic Benchmark For Continual Reinforcement Learning: Continual learning (CL) -- the ability to continuously learn, building on previously acquired knowledge -- is a natural requirement for long-lived autonomous reinforcement learning (RL) agents. While building such agents, one needs to balance opposing desiderata, such as constraints on capacity and compute, the ability to not catastrophically forget, and to exhibit positive transfer on new tasks. Understanding the right trade-off is conceptually and computationally challenging, which we argue has led the community to overly focus on catastrophic forgetting. In response to these issues, we advocate for the need to prioritize forward transfer and propose Continual World, a benchmark consisting of realistic and meaningfully diverse robotic tasks built on top of Meta-World as a testbed. Following an in-depth empirical evaluation of existing CL methods, we pinpoint their limitations and highlight unique algorithmic challenges in the RL setting. Our benchmark aims to provide a meaningful and computationally inexpensive challenge for the community and thus help better understand the performance of existing and future solutions. Information about the benchmark, including the open-source code, is available at https://sites.google.com/view/continualworld.) <|cite_end|> <|cite_start|> (Reference: Disentangling Transfer in Continual Reinforcement Learning: The ability of continual learning systems to transfer knowledge from previously seen tasks in order to maximize performance on new tasks is a significant challenge for the field, limiting the applicability of continual learning solutions to realistic scenarios. Consequently, this study aims to broaden our understanding of transfer and its driving forces in the specific case of continual reinforcement learning. We adopt SAC as the underlying RL algorithm and Continual World as a suite of continuous control tasks. We systematically study how different components of SAC (the actor and the critic, exploration, and data) affect transfer efficacy, and we provide recommendations regarding various modeling options. The best set of choices, dubbed ClonEx-SAC, is evaluated on the recent Continual World benchmark. ClonEx-SAC achieves 87% final success rate compared to 80% of PackNet, the best method in the benchmark. Moreover, the transfer grows from 0.18 to 0.54 according to the metric provided by Continual World.) <|cite_end|>, showing that continual learning can play a role in RL, working better in actor network than critic network. <|cite_start|> (Reference: Is forgetting less a good inductive bias for forward transfer?: One of the main motivations of studying continual learning is that the problem setting allows a model to accrue knowledge from past tasks to learn new tasks more efficiently. However, recent studies suggest that the key metric that continual learning algorithms optimize, reduction in catastrophic forgetting, does not correlate well with the forward transfer of knowledge. We believe that the conclusion previous works reached is due to the way they measure forward transfer. We argue that the measure of forward transfer to a task should not be affected by the restrictions placed on the continual learner in order to preserve knowledge of previous tasks. 
Instead, forward transfer should be measured by how easy it is to learn a new task given a set of representations produced by continual learning on previous tasks. Under this notion of forward transfer, we evaluate different continual learning algorithms on a variety of image classification benchmarks. Our results indicate that less forgetful representations lead to a better forward transfer suggesting a strong correlation between retaining past information and learning efficiency on new tasks. Further, we found less forgetful representations to be more diverse and discriminative compared to their forgetful counterparts.) <|cite_end|> notes that although continual learning aims to alleviate forgetting, existing methods that reduce forgetting can simultaneously enhance forward transfer capabilities. For more details, please refer to the survey and foundational work <|cite_start|> (Reference: On Specification Transparency: Toward A Formal Framework for Designer Comprehensibility of Discrete-Event Control Specifications in Finite Automata: In control of discrete-event systems (DESs), specifying control requirements in automata is not a trivial task. For many DES applications, designers are often confronted with the long-standing problem of uncertainty in specification, namely, how do we know that a specification automaton does indeed model the intended control requirement? Toward a formal framework that helps mitigate this uncertainty for designer comprehensibility, in this paper, we introduce and develop a new specification concept of automaton transparency and investigate the problem of maximizing the transparency of specification automata for DESs. In a transparent specification automaton, events that are irrelevant to the specification but can occur in the system are “hidden” in self-loops. Different automata of the same specification on a DES can be associated with different sets of such irrelevant events, and any such automaton is said to be the most transparent if it has an irrelevant event set of maximal cardinality. The transparency maximization problem is theoretically formulated, and a provably correct solution algorithm is obtained. Given a specification automaton for a DES, the transparent specification automaton produced by the algorithm is a more comprehensible structure, essentially showing the precedence ordering among events from a minimal cardinality set that is relevant in modeling some requirement for the DES, and should aid designers in clarifying if the requirement prescribed is the one intended.) <|cite_end|> <|cite_start|> (Reference: A {Definition: others), and because every penny had to be used for food, Mother had to nurse us children through measles, chickenpox, whooping cough, pneumonia, and hepatitis without the assistance of a doctor or prescription drugs—in a time when many of our young friends were dying of these diseases. My invalid grandmother lived for nine years after we moved in with her, and shortly after Grandma's death, Mother gave birth to another child—the first of her five children to be born in a hospital. In spite of her own serious health problems and need for help at home, Mother was adamant about her children's receiving an education, and regardless of our economic status, she always managed to subscribe to at least one magazine and to make books available to us. 
She herself has never known the joy of going to high school or college, and she has not experienced much ofwhat middle-class America considers absolutely essential for a happy life, but Mae Sutiles Moore is an inspiration to all who meet her. Whether I'm enjoying her homecooked meals, or listening to her sing mountain ballads, or just watching her conquer life with laughter, I am always impressed by her zest for living, and I feel indescribably blessed to call her Mother.) <|cite_end|>.
\subsection{Offline Reinforcement Learning}
Offline reinforcement learning refers to the RL paradigm in which agents learn skills not by interacting with the environment, but from offline datasets of experiences and trajectories collected by other agents or humans. The most critical problem an agent must solve in this setting is the over-estimation of out-of-distribution (OOD) actions.
Initially proposed solutions to this problem constrain the deviation between the learned policy and that in the offline data. These algorithms include <|cite_start|> (Reference: Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning: In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.) <|cite_end|> <|cite_start|> (Reference: Behavior Proximal Policy Optimization: Offline reinforcement learning (RL) is a challenging setting where existing off-policy actor-critic methods perform poorly due to the overestimation of out-of-distribution state-action pairs. Thus, various additional augmentations are proposed to keep the learned policy close to the offline dataset (or the behavior policy). In this work, starting from the analysis of offline monotonic policy improvement, we get a surprising finding that some online on-policy algorithms are naturally able to solve offline RL. Specifically, the inherent conservatism of these on-policy algorithms is exactly what the offline RL method needs to overcome the overestimation. Based on this, we propose Behavior Proximal Policy Optimization (BPPO), which solves offline RL without any extra constraint or regularization introduced compared to PPO. Extensive experiments on the D4RL benchmark indicate this extremely succinct method outperforms state-of-the-art offline RL algorithms. Our implementation is available at https://github.com/Dragon-Zhuang/BPPO.) <|cite_end|>, which incorporate a KL divergence constraint during policy learning, limiting the discrepancy between the agent's policy and offline policy. <|cite_start|> (Reference: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction: Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. 
We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.) <|cite_end|> suggests this deviation should emphasize actions whose behavior policy probability is zero, requiring the probability of selecting these actions to be zero too. Further, <|cite_start|> (Reference: A Minimalist Approach to Coordination: In natural languages, there are two types of coordination―structured vs. unstructured coordination. These two coordination constructions are formed by the set-forming operation FORMSET(FS). FS applies to a work space WS, forming the set Y = {X1, …, Xn}, Xi ∈ WS by minimal search, and is so-called ‘the third factor’ compatible, which means that FS comes free. The distinction between structured and unstructured coordination stems from the modes of application of FS. Assuming WS = {α, β, γ}, if FS applies only once forming {α, β, γ}, then the unstructured coordination construction appears but if FS applies twice forming {{α, β}, γ} or {α, {β, γ}}, then the structured coordination construction appears. With the structure building operation, there must be a labeling algorithm LA. Regarding LA, the subcate- gorization feature is counted as a agreement feature. Finally, FS plus language- specific conditions yields merge.) <|cite_end|> shows this deviation can be more simply achieved by appending a behavior cloning term to online reinforcement learning algorithms.
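As a concrete example of appending a behavior cloning term, the sketch below uses a TD3+BC-style objective: the policy maximizes the critic's value while regressing towards the actions stored in the dataset, with the two terms balanced by a scale-normalized coefficient. The function signatures and the value of `alpha` are assumptions for illustration.

```python
import torch


def bc_regularized_policy_loss(policy, critic, states, dataset_actions, alpha=2.5):
    """Maximize Q while regressing the (deterministic) policy towards dataset actions."""
    pi_actions = policy(states)
    q = critic(states, pi_actions)
    # Normalize the Q term so the behavior-cloning term keeps a comparable scale.
    lam = alpha / q.abs().mean().detach()
    bc_term = ((pi_actions - dataset_actions) ** 2).mean()
    return -(lam * q.mean()) + bc_term
```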
However, as these methods restrict the distance between the learned and behavior policies, they are susceptible to offline data quality <|cite_start|> (Reference: Conservative Offline Distributional Reinforcement Learning: Many reinforcement learning (RL) problems in practice are offline, learning purely from observational data. A key challenge is how to ensure the learned policy is safe, which requires quantifying the risk associated with different actions. In the online setting, distributional RL algorithms do so by learning the distribution over returns (i.e., cumulative rewards) instead of the expected return; beyond quantifying risk, they have also been shown to learn better representations for planning. We propose Conservative Offline Distributional Actor Critic (CODAC), an offline RL algorithm suitable for both risk-neutral and risk-averse domains. CODAC adapts distributional RL to the offline setting by penalizing the predicted quantiles of the return for out-of-distribution actions. We prove that CODAC learns a conservative return distribution -- in particular, for finite MDPs, CODAC converges to an uniform lower bound on the quantiles of the return distribution; our proof relies on a novel analysis of the distributional Bellman operator. In our experiments, on two challenging robot navigation tasks, CODAC successfully learns risk-averse policies using offline data collected purely from risk-neutral agents. Furthermore, CODAC is state-of-the-art on the D4RL MuJoCo benchmark in terms of both expected and risk-sensitive performance.) <|cite_end|>. Another line of RL algorithms tackles this issue by learning a conservative Q function. These include Conservative Q-Learning (CQL) proposed by <|cite_start|> (Reference: Conservative Q-Learning for Offline Reinforcement Learning: Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.) <|cite_end|>. CQL and its successor <|cite_start|> (Reference: Mildly Conservative Q-Learning for Offline Reinforcement Learning: Offline reinforcement learning (RL) defines the task of learning from a static logged dataset without continually interacting with the environment. 
The distribution shift between the learned policy and the behavior policy makes it necessary for the value function to stay conservative such that out-of-distribution (OOD) actions will not be severely overestimated. However, existing approaches, penalizing the unseen actions or regularizing with the behavior policy, are too pessimistic, which suppresses the generalization of the value function and hinders the performance improvement. This paper explores mild but enough conservatism for offline learning while not harming generalization. We propose Mildly Conservative Q-learning (MCQ), where OOD actions are actively trained by assigning them proper pseudo Q values. We theoretically show that MCQ induces a policy that behaves at least as well as the behavior policy and no erroneous overestimation will occur for OOD actions. Experimental results on the D4RL benchmarks demonstrate that MCQ achieves remarkable performance compared with prior work. Furthermore, MCQ shows superior generalization ability when transferring from offline to online, and significantly outperforms baselines. Our code is publicly available at https://github.com/dmksjfl/MCQ.) <|cite_end|> avoid direct policy constraints, addressing the data quality problem by enforcing lower values for OOD data. Other approaches like <|cite_start|> (Reference: Offline Reinforcement Learning: We study a novel setting in offline reinforcement learning (RL) where a number of distributed machines jointly cooperate to solve the problem but only one single round of communication is allowed and there is a budget constraint on the total number of information (in terms of bits) that each machine can send out. For value function prediction in contextual bandits, and both episodic and non-episodic MDPs, we establish information-theoretic lower bounds on the minimax risk for distributed statistical estimators; this reveals the minimum amount of communication required by any offline RL algorithms. Specifically, for contextual bandits, we show that the number of bits must scale at least as Ω ( 𝐴𝐶 ) to match the centralised minimax optimal rate, where 𝐴 is the number of actions and 𝐶 is the context dimension; meanwhile, we reach similar results in the MDP settings. Furthermore, we develop learning algorithms based on least-squares estimates and Monte-Carlo return estimates and provide a sharp analysis showing that they can achieve optimal risk up to logarithmic factors. Additionally, we also show that temporal difference is unable to efficiently utilise information from all available devices under the single-round communication setting due to the initial bias of this method. To our best knowledge, this paper presents the first minimax lower bounds for distributed offline RL problems.) <|cite_end|> concurrently leverage value and Q-functions, averting over-estimation by using only in-distribution Q-values from the samples. Follow-up work includes <|cite_start|> (Reference: Improving Offline RL by Blending Heuristics: We propose Heuristic Blending (HUBL), a simple performance-improving technique for a broad class of offline RL algorithms based on value bootstrapping. HUBL modifies the Bellman operators used in these algorithms, partially replacing the bootstrapped values with heuristic ones that are estimated with Monte-Carlo returns. For trajectories with higher returns, HUBL relies more on the heuristic values and less on bootstrapping; otherwise, it leans more heavily on bootstrapping. 
HUBL is very easy to combine with many existing offline RL implementations by relabeling the offline datasets with adjusted rewards and discount factors. We derive a theory that explains HUBL's effect on offline RL as reducing offline RL's complexity and thus increasing its finite-sample performance. Furthermore, we empirically demonstrate that HUBL consistently improves the policy quality of four state-of-the-art bootstrapping-based offline RL algorithms (ATAC, CQL, TD3+BC, and IQL), by 9% on average over 27 datasets of the D4RL and Meta-World benchmarks.) <|cite_end|>, offering a more theoretical explanation and analysis. Finally, instead of reducing Q values of OOD data through constraints, <|cite_start|> (Reference: Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble: Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that the clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.) <|cite_end|> assigns them lower initial values through ensemble learning for conservative learning. Other related work includes <|cite_start|> (Reference: Why So Pessimistic? Estimating Uncertainties for Offline RL through Ensembles, and Why Their Independence Matters: Motivated by the success of ensembles for uncertainty estimation in supervised learning, we take a renewed look at how ensembles of $Q$-functions can be leveraged as the primary source of pessimism for offline reinforcement learning (RL). We begin by identifying a critical flaw in a popular algorithmic choice used by many ensemble-based RL algorithms, namely the use of shared pessimistic target values when computing each ensemble member's Bellman error. Through theoretical analyses and construction of examples in toy MDPs, we demonstrate that shared pessimistic targets can paradoxically lead to value estimates that are effectively optimistic. 
Given this result, we propose MSG, a practical offline RL algorithm that trains an ensemble of $Q$-functions with independently computed targets based on completely separate networks, and optimizes a policy with respect to the lower confidence bound of predicted action values. Our experiments on the popular D4RL and RL Unplugged offline RL benchmarks demonstrate that on challenging domains such as antmazes, MSG with deep ensembles surpasses highly well-tuned state-of-the-art methods by a wide margin. Additionally, through ablations on benchmarks domains, we verify the critical significance of using independently trained $Q$-functions, and study the role of ensemble size. Finally, as using separate networks per ensemble member can become computationally costly with larger neural network architectures, we investigate whether efficient ensemble approximations developed for supervised learning can be similarly effective, and demonstrate that they do not match the performance and robustness of MSG with separate networks, highlighting the need for new efforts into efficient uncertainty estimation directed at RL.) <|cite_end|> <|cite_start|> (Reference: Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble: Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that the clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> On Specification Transparency: Toward A Formal Framework for Designer Comprehensibility of Discrete-Event Control Specifications in Finite Automata: In control of discrete-event systems (DESs), specifying control requirements in automata is not a trivial task. For many DES applications, designers are often confronted with the long-standing problem of uncertainty in specification, namely, how do we know that a specification automaton does indeed model the intended control requirement? Toward a formal framework that helps mitigate this uncertainty for designer comprehensibility, in this paper, we introduce and develop a new specification concept of automaton transparency and investigate the problem of maximizing the transparency of specification automata for DESs. In a transparent specification automaton, events that are irrelevant to the specification but can occur in the system are “hidden” in self-loops. Different automata of the same specification on a DES can be associated with different sets of such irrelevant events, and any such automaton is said to be the most transparent if it has an irrelevant event set of maximal cardinality. The transparency maximization problem is theoretically formulated, and a provably correct solution algorithm is obtained. Given a specification automaton for a DES, the transparent specification automaton produced by the algorithm is a more comprehensible structure, essentially showing the precedence ordering among events from a minimal cardinality set that is relevant in modeling some requirement for the DES, and should aid designers in clarifying if the requirement prescribed is the one intended. <|reference_end|>",
"<|reference_start|> Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction: Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks. <|reference_end|>",
"<|reference_start|> Offline Reinforcement Learning: We study a novel setting in offline reinforcement learning (RL) where a number of distributed machines jointly cooperate to solve the problem but only one single round of communication is allowed and there is a budget constraint on the total number of information (in terms of bits) that each machine can send out. For value function prediction in contextual bandits, and both episodic and non-episodic MDPs, we establish information-theoretic lower bounds on the minimax risk for distributed statistical estimators; this reveals the minimum amount of communication required by any offline RL algorithms. Specifically, for contextual bandits, we show that the number of bits must scale at least as Ω ( 𝐴𝐶 ) to match the centralised minimax optimal rate, where 𝐴 is the number of actions and 𝐶 is the context dimension; meanwhile, we reach similar results in the MDP settings. Furthermore, we develop learning algorithms based on least-squares estimates and Monte-Carlo return estimates and provide a sharp analysis showing that they can achieve optimal risk up to logarithmic factors. Additionally, we also show that temporal difference is unable to efficiently utilise information from all available devices under the single-round communication setting due to the initial bias of this method. To our best knowledge, this paper presents the first minimax lower bounds for distributed offline RL problems. <|reference_end|>",
"<|reference_start|> Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble: Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that the clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered. <|reference_end|>"
] | [
26,
30,
35,
39
] | {"<|cite_1|>": "arxiv-200198", "<|cite_3|>": "ss-2230027", "<|multi_cite_4_1|>": "ss-718786", "<|multi_cite_4_2|>": "ss-2230028", "<|cite_6|>": "ss-2230029", "<|cite_7|>": "ss-2173235", "<|cite_8|>": "ss-1536144", "<|cite_9|>": "ss-2080024", "<|cite_10|>": "ss-1536144", "<|multi_cite_11_1|>": "arxiv-111666", "<|multi_cite_11_2|>": "arxiv-118897", "<|multi_cite_11_3|>": "ss-1210776", "<|multi_cite_12_1|>": "ss-2230030", "<|multi_cite_12_3|>": "ss-2230031", "<|multi_cite_13_1|>": "ss-2230032", "<|multi_cite_14_1|>": "ss-2230033", "<|multi_cite_14_2|>": "ss-2230034", "<|cite_15|>": "ss-1939401", "<|cite_16|>": "ss-919831", "<|multi_cite_17_1|>": "ss-2230035", "<|multi_cite_17_2|>": "arxiv-377514", "<|multi_cite_18_1|>": "ss-2230034", "<|multi_cite_18_2|>": "ss-2230036", "<|multi_cite_19_1|>": "ss-1206238", "<|multi_cite_19_2|>": "ss-2230037", "<|cite_20|>": "arxiv-488870", "<|multi_cite_22_1|>": "ss-2230038", "<|multi_cite_22_2|>": "ss-764536", "<|multi_cite_23_1|>": "ss-1280461", "<|multi_cite_23_3|>": "arxiv-483407", "<|cite_24|>": "ss-1295156", "<|cite_25|>": "ss-2553120", "<|cite_26|>": "arxiv-354773", "<|cite_27|>": "ss-982347", "<|cite_28|>": "ss-1839381", "<|cite_29|>": "ss-1536144", "<|cite_30|>": "ss-2291175", "<|cite_31|>": "ss-2080024", "<|multi_cite_32_1|>": "ss-1745516", "<|multi_cite_32_2|>": "ss-2080024"} |
Abstract: Behavior Proximal Policy Optimization: Offline reinforcement learning (RL) is a challenging setting where existing off-policy actor-critic methods perform poorly due to the overestimation of out-of-distribution state-action pairs. Thus, various additional augmentations are proposed to keep the learned policy close to the offline dataset (or the behavior policy). In this work, starting from the analysis of offline monotonic policy improvement, we get a surprising finding that some online on-policy algorithms are naturally able to solve offline RL. Specifically, the inherent conservatism of these on-policy algorithms is exactly what the offline RL method needs to overcome the overestimation. Based on this, we propose Behavior Proximal Policy Optimization (BPPO), which solves offline RL without any extra constraint or regularization introduced compared to PPO. Extensive experiments on the D4RL benchmark indicate this extremely succinct method outperforms state-of-the-art offline RL algorithms. Our implementation is available at https://github.com/Dragon-Zhuang/BPPO.
Introduction
Typically, reinforcement learning (RL) is thought of as a paradigm for online learning, where the agent
interacts with the environment to collect experience and then uses that to improve itself <|cite_start|> (Reference: Introduction To Reinforcement Learning: From the Publisher:
In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability.) <|cite_end|>.
This online process poses the biggest obstacles to real-world RL applications because of expensive or even risky data collection in some fields (such as navigation <|cite_start|> (Reference: Learning to Navigate in Cities Without a Map: Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation ("I am here") and a representation of the goal ("I am going there"). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. We present an interactive navigation environment that uses Google StreetView for its photographic content and worldwide coverage, and demonstrate that our learning method allows agents to learn to navigate multiple cities and to traverse to target destinations that may be kilometres away. The project webpage http://streetlearn.cc contains a video summarising our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at https://github.com/deepmind/streetlearn) <|cite_end|> and healthcare <|cite_start|> (Reference: Reinforcement Learning in Healthcare: A Survey: As a subfield of machine learning, reinforcement learning (RL) aims at empowering one's capabilities in behavioural decision making by using interaction experience with the world and an evaluative feedback. Unlike traditional supervised learning methods that usually rely on one-shot, exhaustive and supervised reward signals, RL tackles with sequential decision making problems with sampled, evaluative and delayed feedback simultaneously. Such distinctive features make RL technique a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnosing decisions or treatment regimes are usually characterized by a prolonged and sequential procedure. This survey discusses the broad applications of RL techniques in healthcare domains, in order to provide the research community with systematic understanding of theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. By first briefly examining theoretical foundations and key techniques in RL research from efficient and representational directions, we then provide an overview of RL applications in healthcare domains ranging from dynamic treatment regimes in chronic diseases and critical care, automated medical diagnosis from both unstructured and structured clinical data, as well as many other control or scheduling domains that have infiltrated many aspects of a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out some potential solutions and directions for future research.) <|cite_end|>).
As an alternative, offline RL eliminates the online interaction and learns from a fixed dataset,
collected by some arbitrary and possibly unknown process <|cite_start|> (Reference: Continuous Doubly Constrained Batch Reinforcement Learning: Reliant on too many experiments to learn good actions, current Reinforcement Learning (RL) algorithms have limited applicability in real-world settings, which can be too expensive to allow exploration. We propose an algorithm for batch RL, where effective policies are learned using only a fixed offline dataset instead of online interactions with the environment. The limited data in batch RL produces inherent uncertainty in value estimates of states/actions that were insufficiently represented in the training data. This leads to particularly severe extrapolation when our candidate policies diverge from one that generated the data. We propose to mitigate this issue via two straightforward penalties: a policy-constraint to reduce this divergence and a value-constraint that discourages overly optimistic estimates. Over a comprehensive set of 32 continuous-action batch RL benchmarks, our approach compares favorably to state-of-the-art methods, regardless of how the offline data were collected.) <|cite_end|> <|cite_start|> (Reference: D4RL: Datasets for Deep Data-Driven Reinforcement Learning: The offline reinforcement learning (RL) setting (also known as full batch RL), where a policy is learned from a static dataset, is compelling as progress enables RL methods to take advantage of large, previously-collected datasets, much like how the rise of large datasets has fueled results in supervised learning. However, existing online RL benchmarks are not tailored towards the offline setting and existing offline RL benchmarks are restricted to data generated by partially-trained agents, making progress in offline RL difficult to measure. In this work, we introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL. With a focus on dataset collection, examples of such properties include: datasets generated via hand-designed controllers and human demonstrators, multitask datasets where an agent performs different tasks in the same environment, and datasets collected with mixtures of policies. By moving beyond simple benchmark tasks and data collected by partially-trained RL agents, we reveal important and unappreciated deficiencies of existing algorithms. To facilitate research, we have released our benchmark tasks and datasets with a comprehensive evaluation of existing algorithms, an evaluation protocol, and open-source examples. This serves as a common starting point for the community to identify shortcomings in existing offline RL methods and a collaborative route for progress in this emerging area.) <|cite_end|>.
The prospect of this data-driven mode <|cite_start|> (Reference: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.) <|cite_end|> is pretty encouraging and has been placed with great expectations for solving RL real-world applications.
Unfortunately, the main advantage of offline RL, the absence of online interaction, also introduces another challenge.
In principle, classical off-policy iterative algorithms should be applicable to the offline setting, since offline RL can reasonably be regarded as a more severe off-policy case.
In practice, however, they all tend to underperform due to the overestimation of out-of-distribution (OOD) actions.
In policy evaluation, the $Q$-function poorly estimates the values of OOD state-action pairs.
This in turn corrupts the policy improvement step, where the agent tends to take OOD actions with erroneously high estimated values, resulting in poor performance <|cite_start|> (Reference: Off-Policy Deep Reinforcement Learning without Exploration: Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning with data uncorrelated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.) <|cite_end|>.
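As a rough illustration (using the generic actor-critic backup rather than any specific algorithm), the bootstrapped target queries the $Q$-function at actions proposed by the current policy,
\[
Q(s,a) \;\leftarrow\; r(s,a) + \gamma\, \mathbb{E}_{s'}\!\left[Q\big(s', \pi(s')\big)\right],
\]
and whenever $\pi(s')$ lies outside the support of the dataset, $Q(s',\pi(s'))$ is evaluated at a state-action pair that was never observed; its error is therefore unchecked by data and gets propagated and amplified through repeated backups.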
Thus, some solutions keep the learned policy close to the behavior policy to overcome the overestimation <|cite_start|> (Reference: Off-Policy Deep Reinforcement Learning without Exploration: Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning with data uncorrelated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.) <|cite_end|> <|cite_start|> (Reference: Behavior Regularized Offline Reinforcement Learning: In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.) <|cite_end|>.
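A typical instantiation of this idea (written here only as a generic template, with the divergence $D$, the weight $\alpha$, and the dataset $\mathcal{D}$ left abstract) penalizes deviation from the behavior policy $\pi_\beta$ in the policy objective:
\[
\max_{\pi}\; \mathbb{E}_{s\sim\mathcal{D}}\Big[\mathbb{E}_{a\sim\pi(\cdot|s)}\big[Q(s,a)\big] \;-\; \alpha\, D\big(\pi(\cdot|s)\,\|\,\pi_\beta(\cdot|s)\big)\Big],
\]
where $D$ may be a KL divergence, an MMD, or a simple behavior-cloning term, depending on the concrete method.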
Most offline RL algorithms adopt online interactions to select hyperparameters.
This is because offline hyperparameter selection, which selects hyperparameters without online interactions, is always an open problem lacking satisfactory solutions <|cite_start|> (Reference: Hyperparameter Selection for Offline Reinforcement Learning: Offline reinforcement learning (RL purely from logged data) is an important avenue for deploying RL techniques in real-world scenarios. However, existing hyperparameter selection methods for offline RL break the offline assumption by evaluating policies corresponding to each hyperparameter setting in the environment. This online execution is often infeasible and hence undermines the main aim of offline RL. Therefore, in this work, we focus on \textit{offline hyperparameter selection}, i.e. methods for choosing the best policy from a set of many policies trained using different hyperparameters, given only logged data. Through large-scale empirical evaluation we show that: 1) offline RL algorithms are not robust to hyperparameter choices, 2) factors such as the offline RL algorithm and method for estimating Q values can have a big impact on hyperparameter selection, and 3) when we control those factors carefully, we can reliably rank policies across hyperparameter choices, and therefore choose policies which are close to the best policy in the set. Overall, our results present an optimistic view that offline hyperparameter selection is within reach, even in challenging tasks with pixel observations, high dimensional action spaces, and long horizon.) <|cite_end|> <|cite_start|> (Reference: Towards Hyperparameter-free Policy Selection for Offline Reinforcement Learning: How to select between policies and value functions produced by different training algorithms in offline reinforcement learning (RL) -- which is crucial for hyperpa-rameter tuning -- is an important open question. Existing approaches based on off-policy evaluation (OPE) often require additional function approximation and hence hyperparameters, creating a chicken-and-egg situation. In this paper, we design hyperparameter-free algorithms for policy selection based on BVFT [XJ21], a recent theoretical advance in value-function selection, and demonstrate their effectiveness in discrete-action benchmarks such as Atari. To address performance degradation due to poor critics in continuous-action domains, we further combine BVFT with OPE to get the best of both worlds, and obtain a hyperparameter-tuning method for Q-function based OPE with theoretical guarantees as a side product.) <|cite_end|>.
Deploying the policy learned by offline RL is potentially risky in certain areas <|cite_start|> (Reference: Learning to Navigate in Cities Without a Map: Navigating through unstructured environments is a basic capability of intelligent creatures, and thus is of fundamental interest in the study and development of artificial intelligence. Long-range navigation is a complex cognitive task that relies on developing an internal representation of space, grounded by recognisable landmarks and robust visual processing, that can simultaneously support continuous self-localisation ("I am here") and a representation of the goal ("I am going there"). Building upon recent research that applies deep reinforcement learning to maze navigation problems, we present an end-to-end deep reinforcement learning approach that can be applied on a city scale. Recognising that successful navigation relies on integration of general policies with locale-specific knowledge, we propose a dual pathway architecture that allows locale-specific features to be encapsulated, while still enabling transfer to multiple cities. We present an interactive navigation environment that uses Google StreetView for its photographic content and worldwide coverage, and demonstrate that our learning method allows agents to learn to navigate multiple cities and to traverse to target destinations that may be kilometres away. The project webpage http://streetlearn.cc contains a video summarising our research and showing the trained agent in diverse city environments and on the transfer task, the form to request the StreetLearn dataset and links to further resources. The StreetLearn environment code is available at https://github.com/deepmind/streetlearn) <|cite_end|> <|cite_start|> (Reference: Reinforcement Learning in Healthcare: A Survey: As a subfield of machine learning, reinforcement learning (RL) aims at empowering one's capabilities in behavioural decision making by using interaction experience with the world and an evaluative feedback. Unlike traditional supervised learning methods that usually rely on one-shot, exhaustive and supervised reward signals, RL tackles with sequential decision making problems with sampled, evaluative and delayed feedback simultaneously. Such distinctive features make RL technique a suitable candidate for developing powerful solutions in a variety of healthcare domains, where diagnosing decisions or treatment regimes are usually characterized by a prolonged and sequential procedure. This survey discusses the broad applications of RL techniques in healthcare domains, in order to provide the research community with systematic understanding of theoretical foundations, enabling methods and techniques, existing challenges, and new insights of this emerging paradigm. By first briefly examining theoretical foundations and key techniques in RL research from efficient and representational directions, we then provide an overview of RL applications in healthcare domains ranging from dynamic treatment regimes in chronic diseases and critical care, automated medical diagnosis from both unstructured and structured clinical data, as well as many other control or scheduling domains that have infiltrated many aspects of a healthcare system. Finally, we summarize the challenges and open issues in current research, and point out some potential solutions and directions for future research.) <|cite_end|> since the performance is unknown.
However, if the deployed policy can guarantee better performance than the behavior policy, the risk during online interactions will be greatly reduced.
This inspires us to consider how to use an offline dataset to improve the behavior policy with a monotonic performance guarantee.
We formulate this problem as offline monotonic policy improvement.
To analyze offline monotonic policy improvement, we introduce the Performance Difference Theorem <|cite_start|> (Reference: Approximately Optimal Approximate Reinforcement Learning: ) <|cite_end|>.
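Concretely, for any two policies $\pi$ and $\pi'$, the theorem states (following standard notation) that
\[
J(\pi) - J(\pi') \;=\; \frac{1}{1-\gamma}\, \mathbb{E}_{s\sim d^{\pi}}\, \mathbb{E}_{a\sim\pi(\cdot|s)}\big[A^{\pi'}(s,a)\big],
\]
where $J$ denotes the expected discounted return, $d^{\pi}$ the (normalized) discounted state visitation distribution of $\pi$, and $A^{\pi'}$ the advantage function of $\pi'$; the performance gap between two policies is thus fully determined by the advantages of the old policy evaluated under the new policy's state and action distribution.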
During the analysis, we find that the offline setting does make monotonic policy improvement more complicated, but the way to monotonically improve the policy remains unchanged.
This indicates that algorithms derived from \textit{online} monotonic policy improvement (such as Proximal Policy Optimization, PPO) can also achieve \textit{offline} monotonic policy improvement, which further means that PPO can naturally solve offline RL.
Based on this surprising discovery, we propose \textbf{B}ehavior \textbf{P}roximal \textbf{P}olicy \textbf{O}ptimization (BPPO), an offline algorithm that monotonically improves the behavior policy in the manner of PPO.
Owing to the inherent conservatism of PPO, BPPO restricts the ratio between the learned policy and the behavior policy to a certain range, similar to offline RL methods that keep the learned policy close to the behavior policy.
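In the spirit of the PPO clipped surrogate (stated here in its standard form with the importance ratio taken against the behavior policy $\pi_\beta$, as an illustrative sketch rather than the exact objective), such a restriction can be written as
\[
\max_{\theta}\; \mathbb{E}_{(s,a)}\!\left[\min\!\left(\frac{\pi_\theta(a|s)}{\pi_\beta(a|s)}\,\hat{A}(s,a),\; \operatorname{clip}\!\left(\frac{\pi_\theta(a|s)}{\pi_\beta(a|s)},\,1-\epsilon,\,1+\epsilon\right)\hat{A}(s,a)\right)\right],
\]
so that the updated policy cannot move far from $\pi_\beta$ no matter how large the estimated advantages $\hat{A}$ are.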
As offline algorithms are becoming more and more sophisticated, TD3+BC <|cite_start|> (Reference: A Minimalist Approach to Offline Reinforcement Learning: Offline reinforcement learning (RL) defines the task of learning from a fixed batch of data. Due to errors in value estimation from out-of-distribution actions, most offline RL algorithms take the approach of constraining or regularizing the policy with the actions contained in the dataset. Built on pre-existing RL algorithms, modifications to make an RL algorithm work offline comes at the cost of additional complexity. Offline RL algorithms introduce new hyperparameters and often leverage secondary components such as generative models, while adjusting the underlying RL algorithm. In this paper we aim to make a deep RL algorithm work while making minimal changes. We find that we can match the performance of state-of-the-art offline RL algorithms by simply adding a behavior cloning term to the policy update of an online RL algorithm and normalizing the data. The resulting algorithm is a simple to implement and tune baseline, while more than halving the overall run time by removing the additional computational overhead of previous methods.) <|cite_end|>, which augments TD3 <|cite_start|> (Reference: Addressing Function Approximation Error in Actor-Critic Methods: In value-based reinforcement learning methods such as deep Q-learning, function approximation errors are known to lead to overestimated value estimates and suboptimal policies. We show that this problem persists in an actor-critic setting and propose novel mechanisms to minimize its effects on both the actor and the critic. Our algorithm builds on Double Q-learning, by taking the minimum value between a pair of critics to limit overestimation. We draw the connection between target networks and overestimation bias, and suggest delaying policy updates to reduce per-update error and further improve performance. We evaluate our method on the suite of OpenAI gym tasks, outperforming the state of the art in every environment tested.) <|cite_end|> with behavior cloning <|cite_start|> (Reference: ALVINN: an autonomous land vehicle in a neural network: The support-vector network is a new leaming machine for two-group classification problems. The machine conceptually implements the following idea: input vectors are non-linearly mapped to a very highdimension feature space. In this feature space a linear decision surface is constructed. Special properties of the decision surface ensures high generalization ability of the learning machine. The idea behind the support-vector network was previously implemented for the restricted case where the training data can be separated without errors. We here extend this result to non-separable training data. High generalization ability of support-vector networks utilizing polynomial input transformations is demonstrated. We also compare the performance of the support-vector network to various classical learning algorithms that all took part in a benchmark study of Optical Character Recognition.) <|cite_end|>, reminds us to revisit the simple alternatives with potentially good performance.
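Recalling the standard form of its policy objective, TD3+BC simply adds a behavior-cloning term to the deterministic actor update,
\[
\pi \;=\; \arg\max_{\pi}\; \mathbb{E}_{(s,a)\sim\mathcal{D}}\Big[\lambda\, Q\big(s,\pi(s)\big) - \big(\pi(s)-a\big)^{2}\Big],
\]
with a single scalar $\lambda$ (set adaptively in practice) trading off the RL term against the cloning term.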
BPPO is exactly such a simple alternative: it introduces no extra constraint or regularization on top of PPO.
Extensive experiments on the D4RL benchmark <|cite_start|> (Reference: D4RL: Datasets for Deep Data-Driven Reinforcement Learning: The offline reinforcement learning (RL) setting (also known as full batch RL), where a policy is learned from a static dataset, is compelling as progress enables RL methods to take advantage of large, previously-collected datasets, much like how the rise of large datasets has fueled results in supervised learning. However, existing online RL benchmarks are not tailored towards the offline setting and existing offline RL benchmarks are restricted to data generated by partially-trained agents, making progress in offline RL difficult to measure. In this work, we introduce benchmarks specifically designed for the offline setting, guided by key properties of datasets relevant to real-world applications of offline RL. With a focus on dataset collection, examples of such properties include: datasets generated via hand-designed controllers and human demonstrators, multitask datasets where an agent performs different tasks in the same environment, and datasets collected with mixtures of policies. By moving beyond simple benchmark tasks and data collected by partially-trained RL agents, we reveal important and unappreciated deficiencies of existing algorithms. To facilitate research, we have released our benchmark tasks and datasets with a comprehensive evaluation of existing algorithms, an evaluation protocol, and open-source examples. This serves as a common starting point for the community to identify shortcomings in existing offline RL methods and a collaborative route for progress in this emerging area.) <|cite_end|> indicate BPPO has outperformed state-of-the-art offline RL algorithms.
Related Work
\paragraph{Offline Reinforcement Learning}
Most of the online off-policy methods fail or underperform in offline RL due to extrapolation error <|cite_start|> (Reference: Off-Policy Deep Reinforcement Learning without Exploration: Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning with data uncorrelated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.) <|cite_end|> or distributional shift <|cite_start|> (Reference: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.) <|cite_end|>.
Thus, most offline algorithms augment existing off-policy algorithms with a penalty measuring the divergence between the learned policy and the offline data (or the behavior policy).
Depending on how to implement this penalty, a variety of methods were proposed such as batch constrained <|cite_start|> (Reference: Off-Policy Deep Reinforcement Learning without Exploration: Many practical applications of reinforcement learning constrain agents to learn from a fixed batch of data which has already been gathered, without offering further possibility for data collection. In this paper, we demonstrate that due to errors introduced by extrapolation, standard off-policy deep reinforcement learning algorithms, such as DQN and DDPG, are incapable of learning with data uncorrelated to the distribution under the current policy, making them ineffective for this fixed batch setting. We introduce a novel class of off-policy algorithms, batch-constrained reinforcement learning, which restricts the action space in order to force the agent towards behaving close to on-policy with respect to a subset of the given data. We present the first continuous control deep reinforcement learning algorithm which can learn effectively from arbitrary, fixed batch data, and empirically demonstrate the quality of its behavior in several tasks.) <|cite_end|>, KL-control <|cite_start|> (Reference: Way Off-Policy Batch Deep Reinforcement Learning of Implicit Human Preferences in Dialog: Most deep reinforcement learning (RL) systems are not able to learn effectively from off-policy data, especially if they cannot explore online in the environment. These are critical shortcomings for applying RL to real-world problems where collecting data is expensive, and models must be tested offline before being deployed to interact with the environment -- e.g. systems that learn from human interaction. Thus, we develop a novel class of off-policy batch RL algorithms, which are able to effectively learn offline, without exploring, from a fixed batch of human interaction data. We leverage models pre-trained on data as a strong prior, and use KL-control to penalize divergence from this prior during RL training. We also use dropout-based uncertainty estimates to lower bound the target Q-values as a more efficient alternative to Double Q-Learning. The algorithms are tested on the problem of open-domain dialog generation -- a challenging reinforcement learning problem with a 20,000-dimensional action space. Using our Way Off-Policy algorithm, we can extract multiple different reward functions post-hoc from collected human interaction data, and learn effectively from all of these. We test the real-world generalization of these systems by deploying them live to converse with humans in an open-domain setting, and demonstrate that our algorithm achieves significant improvements over prior methods in off-policy batch RL.) <|cite_end|> <|cite_start|> (Reference: DARA: Dynamics-Aware Reward Augmentation in Offline Reinforcement Learning: Offline reinforcement learning algorithms promise to be applicable in settings where a fixed dataset is available and no new experience can be acquired. However, such formulation is inevitably offline-data-hungry and, in practice, collecting a large offline dataset for one specific task over one specific environment is also costly and laborious. 
In this paper, we thus 1) formulate the offline dynamics adaptation by using (source) offline data collected from another dynamics to relax the requirement for the extensive (target) offline data, 2) characterize the dynamics shift problem in which prior offline methods do not scale well, and 3) derive a simple dynamics-aware reward augmentation (DARA) framework from both model-free and model-based offline settings. Specifically, DARA emphasizes learning from those source transition pairs that are adaptive for the target environment and mitigates the offline dynamics shift by characterizing state-action-next-state pairs instead of the typical state-action distribution sketched by prior offline RL methods. The experimental evaluation demonstrates that DARA, by augmenting rewards in the source offline dataset, can acquire an adaptive policy for the target environment and yet significantly reduce the requirement of target offline data. With only modest amounts of target offline data, our performance consistently outperforms the prior offline RL methods in both simulated and real-world tasks.) <|cite_end|>, behavior-regularized <|cite_start|> (Reference: Behavior Regularized Offline Reinforcement Learning: In reinforcement learning (RL) research, it is common to assume access to direct online interactions with the environment. However in many real-world applications, access to the environment is limited to a fixed offline dataset of logged experience. In such settings, standard RL algorithms have been shown to diverge or otherwise yield poor performance. Accordingly, recent work has suggested a number of remedies to these issues. In this work, we introduce a general framework, behavior regularized actor critic (BRAC), to empirically evaluate recently proposed methods as well as a number of simple baselines across a variety of offline continuous control tasks. Surprisingly, we find that many of the technical complexities introduced in recent methods are unnecessary to achieve strong performance. Additional ablations provide insights into which design choices matter most in the offline RL setting.) <|cite_end|> <|cite_start|> (Reference: A Minimalist Approach to Offline Reinforcement Learning: Offline reinforcement learning (RL) defines the task of learning from a fixed batch of data. Due to errors in value estimation from out-of-distribution actions, most offline RL algorithms take the approach of constraining or regularizing the policy with the actions contained in the dataset. Built on pre-existing RL algorithms, modifications to make an RL algorithm work offline comes at the cost of additional complexity. Offline RL algorithms introduce new hyperparameters and often leverage secondary components such as generative models, while adjusting the underlying RL algorithm. In this paper we aim to make a deep RL algorithm work while making minimal changes. We find that we can match the performance of state-of-the-art offline RL algorithms by simply adding a behavior cloning term to the policy update of an online RL algorithm and normalizing the data. The resulting algorithm is a simple to implement and tune baseline, while more than halving the overall run time by removing the additional computational overhead of previous methods.) <|cite_end|> and policy constraint <|cite_start|> (Reference: Stabilizing Off-Policy Q-Learning via Bootstrapping Error Reduction: Off-policy reinforcement learning aims to leverage experience collected from prior policies for sample-efficient learning. 
However, in practice, commonly used off-policy approximate dynamic programming methods based on Q-learning and actor-critic methods are highly sensitive to the data distribution, and can make only limited progress without collecting additional on-policy data. As a step towards more robust off-policy algorithms, we study the setting where the off-policy experience is fixed and there is no further interaction with the environment. We identify bootstrapping error as a key source of instability in current methods. Bootstrapping error is due to bootstrapping from actions that lie outside of the training data distribution, and it accumulates via the Bellman backup operator. We theoretically analyze bootstrapping error, and demonstrate how carefully constraining action selection in the backup can mitigate it. Based on our analysis, we propose a practical algorithm, bootstrapping error accumulation reduction (BEAR). We demonstrate that BEAR is able to learn robustly from different off-policy distributions, including random and suboptimal demonstrations, on a range of continuous control tasks.) <|cite_end|> <|cite_start|> (Reference: Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems: In this tutorial article, we aim to provide the reader with the conceptual tools needed to get started on research on offline reinforcement learning algorithms: reinforcement learning algorithms that utilize previously collected data, without additional online data collection. Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision making engines. Effective offline reinforcement learning methods would be able to extract policies with the maximum possible utility out of the available data, thereby allowing automation of a wide range of decision-making domains, from healthcare and education to robotics. However, the limitations of current algorithms make this difficult. We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods, and describe some potential solutions that have been explored in recent work to mitigate these challenges, along with recent applications, and a discussion of perspectives on open problems in the field.) <|cite_end|> <|cite_start|> (Reference: Offline Reinforcement Learning with Implicit Q-Learning: Offline reinforcement learning requires reconciling two conflicting aims: learning a policy that improves over the behavior policy that collected the dataset, while at the same time minimizing the deviation from the behavior policy so as to avoid errors due to distributional shift. This trade-off is critical, because most current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy, and therefore need to either constrain these actions to be in-distribution, or else regularize their values. We propose an offline RL method that never needs to evaluate actions outside of the dataset, but still enables the learned policy to improve substantially over the best behavior in the data through generalization. 
The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state conditional upper expectile of this random variable to estimate the value of the best actions in that state. This leverages the generalization capacity of the function approximator to estimate the value of the best available action at a given state without ever directly querying a Q-function with this unseen action. Our algorithm alternates between fitting this upper expectile value function and backing it up into a Q-function. Then, we extract the policy via advantage-weighted behavioral cloning. We dub our method implicit Q-learning (IQL). IQL demonstrates the state-of-the-art performance on D4RL, a standard benchmark for offline reinforcement learning. We also demonstrate that IQL achieves strong performance fine-tuning using online interaction after offline initialization.) <|cite_end|>.
Other methods augment BC with a weight to make the policy favor high advantage actions <|cite_start|> (Reference: Exponentially weighted imitation learning for batched historical data: We consider deep policy learning with only batched historical trajectories. The main challenge of this problem is that the learner no longer has a simulator or ``environment oracle'' as in most reinforcement learning settings. To solve this problem, we propose a monotonic advantage reweighted imitation learning strategy that is applicable to problems with complex nonlinear function approximation and works well with hybrid (discrete and continuous) action space. The method does not rely on the knowledge of the behavior policy, thus can be used to learn from data generated by an unknown policy. Under mild conditions, our algorithm, though surprisingly simple, has a policy improvement bound and outperforms most competing methods empirically. Thorough numerical results are also provided to demonstrate the efficacy of the proposed methodology.) <|cite_end|> <|cite_start|> (Reference: Keep Doing What Worked: Behavioral Modelling Priors for Offline Reinforcement Learning: Off-policy reinforcement learning algorithms promise to be applicable in settings where only a fixed data-set (batch) of environment interactions is available and no new experience can be acquired. This property makes these algorithms appealing for real world problems such as robot control. In practice, however, standard off-policy algorithms fail in the batch setting for continuous control. In this paper, we propose a simple solution to this problem. It admits the use of data generated by arbitrary behavior policies and uses a learned prior -- the advantage-weighted behavior model (ABM) -- to bias the RL policy towards actions that have previously been executed and are likely to be successful on the new task. Our method can be seen as an extension of recent work on batch-RL that enables stable learning from conflicting data-sources. We find improvements on competitive baselines in a variety of RL tasks -- including standard continuous control benchmarks and multi-task learning for simulated and real-world robots.) <|cite_end|> <|cite_start|> (Reference: Advantage-Weighted Regression: Simple and Scalable Off-Policy Reinforcement Learning: In this paper, we aim to develop a simple and scalable reinforcement learning algorithm that uses standard supervised learning methods as subroutines. Our goal is an algorithm that utilizes only simple and convergent maximum likelihood loss functions, while also being able to leverage off-policy data. Our proposed approach, which we refer to as advantage-weighted regression (AWR), consists of two standard supervised learning steps: one to regress onto target values for a value function, and another to regress onto weighted target actions for the policy. The method is simple and general, can accommodate continuous and discrete actions, and can be implemented in just a few lines of code on top of standard supervised learning methods. We provide a theoretical motivation for AWR and analyze its properties when incorporating off-policy data from experience replay. We evaluate AWR on a suite of standard OpenAI Gym benchmark tasks, and show that it achieves competitive performance compared to a number of well-established state-of-the-art RL algorithms. 
AWR is also able to acquire more effective policies than most off-policy algorithms when learning from purely static datasets with no additional environmental interactions. Furthermore, we demonstrate our algorithm on challenging continuous control tasks with highly complex simulated characters.) <|cite_end|> <|cite_start|> (Reference: Critic Regularized Regression: Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.) <|cite_end|>.
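A representative weighting (shown here in the exponential, AWR-style form, which is one common choice among several) is
\[
\max_{\pi}\; \mathbb{E}_{(s,a)\sim\mathcal{D}}\Big[\exp\!\big(\hat{A}(s,a)/\beta\big)\,\log \pi(a|s)\Big],
\]
with temperature $\beta$ and estimated advantage $\hat{A}$; the objective reduces to plain behavior cloning as $\beta \to \infty$ and increasingly favors high-advantage actions as $\beta$ decreases.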
Some methods extra introduced Uncertainty estimation <|cite_start|> (Reference: Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble: Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which themselves can be a non-trivial problem. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that the clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with the clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.) <|cite_end|> <|cite_start|> (Reference: Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning: Offline Reinforcement Learning (RL) aims to learn policies from previously collected datasets without exploring the environment. Directly applying off-policy algorithms to offline RL usually fails due to the extrapolation error caused by the out-of-distribution (OOD) actions. Previous methods tackle such problem by penalizing the Q-values of OOD actions or constraining the trained policy to be close to the behavior policy. Nevertheless, such methods typically prevent the generalization of value functions beyond the offline data and also lack precise characterization of OOD data. In this paper, we propose Pessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven offline algorithm without explicit policy constraints. Specifically, PBRL conducts uncertainty quantification via the disagreement of bootstrapped Q-functions, and performs pessimistic updates by penalizing the value function based on the estimated uncertainty. To tackle the extrapolating error, we further propose a novel OOD sampling method. We show that such OOD sampling and pessimistic bootstrapping yields provable uncertainty quantifier in linear MDPs, thus providing the theoretical underpinning for PBRL. Extensive experiments on D4RL benchmark show that PBRL has better performance compared to the state-of-the-art algorithms.) <|cite_end|> or conservative <|cite_start|> (Reference: Conservative Q-Learning for Offline Reinforcement Learning: Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. 
Offline RL algorithms promise to learn effective policies from previously-collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, and standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.) <|cite_end|> <|cite_start|> (Reference: COMBO: Conservative Offline Model-Based Policy Optimization: Model-based algorithms, which learn a dynamics model from logged experience and perform some sort of pessimistic planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating pessimism. Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. We theoretically show that our method optimizes a lower bound on the true policy value, that this bound is tighter than that of prior methods, and our approach satisfies a policy improvement guarantee in the offline setting. Through experiments, we find that COMBO consistently performs as well or better as compared to prior offline model-free and model-based methods on widely studied offline RL benchmarks, including image-based tasks.) <|cite_end|> <|cite_start|> (Reference: AlgaeDICE: Policy Gradient from Arbitrary Experience: In many real-world applications of reinforcement learning (RL), interactions with the environment are limited due to cost or feasibility. This presents a challenge to traditional RL algorithms since the max-return objective involves an expectation over on-policy samples. We introduce a new formulation of max-return optimization that allows the problem to be re-expressed by an expectation over an arbitrary behavior-agnostic and off-policy data distribution. We first derive this result by considering a regularized version of the dual max-return objective before extending our findings to unregularized objectives through the use of a Lagrangian formulation of the linear programming characterization of Q-values. 
We show that, if auxiliary dual variables of the objective are optimized, then the gradient of the off-policy objective is exactly the on-policy policy gradient, without any use of importance weighting. In addition to revealing the appealing theoretical properties of this approach, we also show that it delivers good practical performance.) <|cite_end|> estimation to overcome overestimation.
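Within the conservative family, a simplified form of the value regularizer (omitting algorithm-specific details and written only for intuition) is
\[
\min_{Q}\; \alpha\Big(\mathbb{E}_{s\sim\mathcal{D},\,a\sim\mu(\cdot|s)}\big[Q(s,a)\big] \;-\; \mathbb{E}_{(s,a)\sim\mathcal{D}}\big[Q(s,a)\big]\Big) \;+\; \mathcal{L}_{\mathrm{Bellman}}(Q),
\]
which pushes down $Q$-values of actions drawn from a proposal distribution $\mu$ while pushing up the values of actions actually present in the dataset, thereby counteracting overestimation.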
\paragraph{Monotonic Policy Improvement}
Monotonic policy improvement in online RL was first introduced by <|cite_start|> (Reference: Approximately Optimal Approximate Reinforcement Learning: ) <|cite_end|>.
On this basis, two classical on-policy methods Trust Region Policy Optimization (TRPO) <|cite_start|> (Reference: Trust Region Policy Optimization: We describe an iterative procedure for optimizing policies, with guaranteed monotonic improvement. By making several approximations to the theoretically-justified procedure, we develop a practical algorithm, called Trust Region Policy Optimization (TRPO). This algorithm is similar to natural policy gradient methods and is effective for optimizing large nonlinear policies such as neural networks. Our experiments demonstrate its robust performance on a wide variety of tasks: learning simulated robotic swimming, hopping, and walking gaits; and playing Atari games using images of the screen as input. Despite its approximations that deviate from the theory, TRPO tends to give monotonic improvement, with little tuning of hyperparameters.) <|cite_end|> and Proximal Policy Optimization (PPO) <|cite_start|> (Reference: Proximal Policy Optimization Algorithms: We propose a new family of policy gradient methods for reinforcement learning, which alternate between sampling data through interaction with the environment, and optimizing a "surrogate" objective function using stochastic gradient ascent. Whereas standard policy gradient methods perform one gradient update per data sample, we propose a novel objective function that enables multiple epochs of minibatch updates. The new methods, which we call proximal policy optimization (PPO), have some of the benefits of trust region policy optimization (TRPO), but they are much simpler to implement, more general, and have better sample complexity (empirically). Our experiments test PPO on a collection of benchmark tasks, including simulated robotic locomotion and Atari game playing, and we show that PPO outperforms other online policy gradient methods, and overall strikes a favorable balance between sample complexity, simplicity, and wall-time.) <|cite_end|> were proposed.
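These methods rest on a lower bound of the true return of the following form (stated here in its standard TRPO form):
\[
J(\pi) \;\ge\; L_{\pi_{\mathrm{old}}}(\pi) \;-\; C\,\max_{s}\, D_{\mathrm{KL}}\big(\pi_{\mathrm{old}}(\cdot|s)\,\|\,\pi(\cdot|s)\big),
\]
where $L_{\pi_{\mathrm{old}}}$ is the surrogate objective built from the advantages of $\pi_{\mathrm{old}}$ and $C$ is a constant depending on the discount factor and the maximum advantage; since the right-hand side equals $J(\pi_{\mathrm{old}})$ at $\pi=\pi_{\mathrm{old}}$, maximizing it at every update guarantees that the return does not decrease.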
Afterwards, monotonic policy improvement has been extended to constrained MDP <|cite_start|> (Reference: Constrained Policy Optimization: For many applications of reinforcement learning it can be more convenient to specify both a reward function and constraints, rather than trying to design behavior through the reward function. For example, systems that physically interact with or around humans should satisfy safety constraints. Recent advances in policy search algorithms (Mnih et al., 2016, Schulman et al., 2015, Lillicrap et al., 2016, Levine et al., 2016) have enabled new capabilities in high-dimensional control, but do not consider the constrained setting. We propose Constrained Policy Optimization (CPO), the first general-purpose policy search algorithm for constrained reinforcement learning with guarantees for near-constraint satisfaction at each iteration. Our method allows us to train neural network policies for high-dimensional control while making guarantees about policy behavior all throughout training. Our guarantees are based on a new theoretical result, which is of independent interest: we prove a bound relating the expected returns of two policies to an average divergence between them. We demonstrate the effectiveness of our approach on simulated robot locomotion tasks where the agent must satisfy constraints motivated by safety.) <|cite_end|>, model-based method <|cite_start|> (Reference: Algorithmic Framework for Model-based Deep Reinforcement Learning with Theoretical Guarantees: Model-based reinforcement learning (RL) is considered to be a promising approach to reduce the sample complexity that hinders model-free RL. However, the theoretical understanding of such methods has been rather limited. This paper introduces a novel algorithmic framework for designing and analyzing model-based RL algorithms with theoretical guarantees. We design a meta-algorithm with a theoretical guarantee of monotone improvement to a local maximum of the expected reward. The meta-algorithm iteratively builds a lower bound of the expected reward based on the estimated dynamical model and sample trajectories, and then maximizes the lower bound jointly over the policy and the model. The framework extends the optimism-in-face-of-uncertainty principle to non-linear dynamical models in a way that requires \textit{no explicit} uncertainty quantification. Instantiating our framework with simplification gives a variant of model-based RL algorithms Stochastic Lower Bounds Optimization (SLBO). Experiments demonstrate that SLBO achieves state-of-the-art performance when only one million or fewer samples are permitted on a range of continuous control benchmark tasks.) <|cite_end|> and off-policy RL <|cite_start|> (Reference: Generalized Proximal Policy Optimization with Sample Reuse: In real-world decision making tasks, it is critical for data-driven reinforcement learning methods to be both stable and sample efficient. On-policy methods typically generate reliable policy improvement throughout training, while off-policy methods make more efficient use of data through sample reuse. In this work, we combine the theoretically supported stability benefits of on-policy algorithms with the sample efficiency of off-policy algorithms. We develop policy improvement guarantees that are suitable for the off-policy setting, and connect these bounds to the clipping mechanism used in Proximal Policy Optimization. 
This motivates an off-policy version of the popular algorithm that we call Generalized Proximal Policy Optimization with Sample Reuse. We demonstrate both theoretically and empirically that our algorithm delivers improved performance by effectively balancing the competing goals of stability and sample efficiency.) <|cite_end|> <|cite_start|> (Reference: An off-policy trust region policy optimization method with monotonic improvement guarantee for deep reinforcement learning: In deep reinforcement learning, off-policy data help reduce on-policy interaction with the environment, and the trust region policy optimization (TRPO) method is efficient to stabilize the policy optimization procedure. In this article, we propose an off-policy TRPO method, off-policy TRPO, which exploits both on- and off-policy data and guarantees the monotonic improvement of policies. A surrogate objective function is developed to use both on- and off-policy data and keep the monotonic improvement of policies. We then optimize this surrogate objective function by approximately solving a constrained optimization problem under arbitrary parameterization and finite samples. We conduct experiments on representative continuous control tasks from OpenAI Gym and MuJoCo. The results show that the proposed off-policy TRPO achieves better performance in the majority of continuous control tasks compared with other trust region policy-based methods using off-policy data.) <|cite_end|>.
The main idea behind BPPO is to regularize each policy update by restricting its divergence from the behavior policy.
Such regularization is often used in unsupervised skill learning <|cite_start|> (Reference: Unsupervised Domain Adaptation with Dynamics-Aware Rewards in Reinforcement Learning: Unsupervised reinforcement learning aims to acquire skills without prior goal representations, where an agent automatically explores an open-ended environment to represent goals and learn the goal-conditioned policy. However, this procedure is often time-consuming, limiting the rollout in some potentially expensive target environments. The intuitive approach of training in another interaction-rich environment disrupts the reproducibility of trained skills in the target environment due to the dynamics shifts and thus inhibits direct transferring. Assuming free access to a source environment, we propose an unsupervised domain adaptation method to identify and acquire skills across dynamics. Particularly, we introduce a KL regularized objective to encourage emergence of skills, rewarding the agent for both discovering skills and aligning its behaviors respecting dynamics shifts. This suggests that both dynamics (source and target) shape the reward to facilitate the learning of adaptive skills. We also conduct empirical experiments to demonstrate that our method can effectively learn skills that can be smoothly deployed in target.) <|cite_end|> <|cite_start|> (Reference: Learn Goal-Conditioned Policy with Intrinsic Motivation for Deep Reinforcement Learning: It is of significance for an agent to learn a widely applicable and general-purpose policy that can achieve diverse goals including images and text descriptions. Considering such perceptually-specific goals, the frontier of deep reinforcement learning research is to learn a goal-conditioned policy without hand-crafted rewards. To learn this kind of policy, recent works usually take as the reward the non-parametric distance to a given goal in an explicit embedding space. From a different viewpoint, we propose a novel unsupervised learning approach named goal-conditioned policy with intrinsic motivation (GPIM), which jointly learns both an abstract-level policy and a goal-conditioned policy. The abstract-level policy is conditioned on a latent variable to optimize a discriminator and discovers diverse states that are further rendered into perceptually-specific goals for the goal-conditioned policy. The learned discriminator serves as an intrinsic reward function for the goal-conditioned policy to imitate the trajectory induced by the abstract-level policy. Experiments on various robotic tasks demonstrate the effectiveness and efficiency of our proposed GPIM method which substantially outperforms prior techniques.) <|cite_end|> <|cite_start|> (Reference: Independent skill transfer for deep reinforcement learning: Recently, diverse primitive skills have been learned by adopting the entropy as intrinsic reward, which further shows that new practical skills can be produced by combining a variety of primitive skills. This is essentially skill transfer, very useful for learning high-level skills but quite challenging due to the low efficiency of transferring primitive skills. In this paper, we propose a novel efficient skill transfer method, where we learn independent skills and only independent components of skills are transferred instead of the whole set of skills. More concretely, independent components of skills are obtained through independent component analysis (ICA), which always have a smaller amount (or lower dimension) compared with their mixtures. 
With a lower dimension, independent skill transfer (IST) exhibits a higher efficiency on learning a given task. Extensive experiments including three robotic tasks demonstrate the effectiveness and high efficiency of our proposed IST method in comparison to direct primitive-skill transfer and conventional reinforcement learning.) <|cite_end|> and imitation learning <|cite_start|> (Reference: Wasserstein Adversarial Imitation Learning: Imitation Learning describes the problem of recovering an expert policy from demonstrations. While inverse reinforcement learning approaches are known to be very sample-efficient in terms of expert demonstrations, they usually require problem-dependent reward functions or a (task-)specific reward-function regularization. In this paper, we show a natural connection between inverse reinforcement learning approaches and Optimal Transport, that enables more general reward functions with desirable properties (e.g., smoothness). Based on our observation, we propose a novel approach called Wasserstein Adversarial Imitation Learning. Our approach considers the Kantorovich potentials as a reward function and further leverages regularized optimal transport to enable large-scale applications. In several robotic experiments, our approach outperforms the baselines in terms of average cumulative rewards and shows a significant improvement in sample-efficiency, by requiring just one expert demonstration.) <|cite_end|> <|cite_start|> (Reference: Off-Dynamics Inverse Reinforcement Learning from Hetero-Domain: We propose an approach for inverse reinforcement learning from hetero-domain which learns a reward function in the simulator, drawing on the demonstrations from the real world. The intuition behind the method is that the reward function should not only be oriented to imitate the experts, but should encourage actions adjusted for the dynamics difference between the simulator and the real world. To achieve this, the widely used GAN-inspired IRL method is adopted, and its discriminator, recognizing policy-generating trajectories, is modified with the quantification of dynamics difference. The training process of the discriminator can yield the transferable reward function suitable for simulator dynamics, which can be guaranteed by derivation. Effectively, our method assigns higher rewards for demonstration trajectories which do not exploit discrepancies between the two domains. With extensive experiments on continuous control tasks, our method shows its effectiveness and demonstrates its scalability to high-dimensional tasks.) <|cite_end|>. <|cite_start|> (Reference: Offline Reinforcement Learning with Soft Behavior Regularization: Most prior approaches to offline reinforcement learning (RL) utilize \textit{behavior regularization}, typically augmenting existing off-policy actor critic algorithms with a penalty measuring divergence between the policy and the offline data. However, these approaches lack guaranteed performance improvement over the behavior policy. In this work, we start from the performance difference between the learned policy and the behavior policy, we derive a new policy learning objective that can be used in the offline setting, which corresponds to the advantage function value of the behavior policy, multiplying by a state-marginal density ratio. We propose a practical way to compute the density ratio and demonstrate its equivalence to a state-dependent behavior regularization. 
Unlike state-independent regularization used in prior approaches, this \textit{soft} regularization allows more freedom of policy deviation at high confidence states, leading to better performance and stability. We thus term our resulting algorithm Soft Behavior-regularized Actor Critic (SBAC). Our experimental results show that SBAC matches or outperforms the state-of-the-art on a set of continuous control locomotion and manipulation tasks.) <|cite_end|> notes that offline algorithms lack a guaranteed performance improvement over the behavior policy, but, to our knowledge, we are the first to introduce monotonic policy improvement to solve offline RL.
"<|reference_start|> Pessimistic Bootstrapping for Uncertainty-Driven Offline Reinforcement Learning: Offline Reinforcement Learning (RL) aims to learn policies from previously collected datasets without exploring the environment. Directly applying off-policy algorithms to offline RL usually fails due to the extrapolation error caused by the out-of-distribution (OOD) actions. Previous methods tackle such problem by penalizing the Q-values of OOD actions or constraining the trained policy to be close to the behavior policy. Nevertheless, such methods typically prevent the generalization of value functions beyond the offline data and also lack precise characterization of OOD data. In this paper, we propose Pessimistic Bootstrapping for offline RL (PBRL), a purely uncertainty-driven offline algorithm without explicit policy constraints. Specifically, PBRL conducts uncertainty quantification via the disagreement of bootstrapped Q-functions, and performs pessimistic updates by penalizing the value function based on the estimated uncertainty. To tackle the extrapolating error, we further propose a novel OOD sampling method. We show that such OOD sampling and pessimistic bootstrapping yields provable uncertainty quantifier in linear MDPs, thus providing the theoretical underpinning for PBRL. Extensive experiments on D4RL benchmark show that PBRL has better performance compared to the state-of-the-art algorithms. <|reference_end|>",
"<|reference_start|> Approximately Optimal Approximate Reinforcement Learning: <|reference_end|>",
"<|reference_start|> Unsupervised Domain Adaptation with Dynamics-Aware Rewards in Reinforcement Learning: Unsupervised reinforcement learning aims to acquire skills without prior goal representations, where an agent automatically explores an open-ended environment to represent goals and learn the goal-conditioned policy. However, this procedure is often time-consuming, limiting the rollout in some potentially expensive target environments. The intuitive approach of training in another interaction-rich environment disrupts the reproducibility of trained skills in the target environment due to the dynamics shifts and thus inhibits direct transferring. Assuming free access to a source environment, we propose an unsupervised domain adaptation method to identify and acquire skills across dynamics. Particularly, we introduce a KL regularized objective to encourage emergence of skills, rewarding the agent for both discovering skills and aligning its behaviors respecting dynamics shifts. This suggests that both dynamics (source and target) shape the reward to facilitate the learning of adaptive skills. We also conduct empirical experiments to demonstrate that our method can effectively learn skills that can be smoothly deployed in target. <|reference_end|>",
"<|reference_start|> Independent skill transfer for deep reinforcement learning: Recently, diverse primitive skills have been learned by adopting the entropy as intrinsic reward, which further shows that new practical skills can be produced by combining a variety of primitive skills. This is essentially skill transfer, very useful for learning high-level skills but quite challenging due to the low efficiency of transferring primitive skills. In this paper, we propose a novel efficient skill transfer method, where we learn independent skills and only independent components of skills are transferred instead of the whole set of skills. More concretely, independent components of skills are obtained through independent component analysis (ICA), which always have a smaller amount (or lower dimension) compared with their mixtures. With a lower dimension, independent skill transfer (IST) exhibits a higher efficiency on learning a given task. Extensive experiments including three robotic tasks demonstrate the effectiveness and high efficiency of our proposed IST method in comparison to direct primitive-skill transfer and conventional reinforcement learning. <|reference_end|>"
] | [
33,
37,
44,
46
] | {"<|cite_1|>": "ss-965032", "<|cite_2|>": "arxiv-153424", "<|cite_3|>": "arxiv-220059", "<|multi_cite_4_1|>": "ss-1289096", "<|multi_cite_4_2|>": "arxiv-259548", "<|cite_5|>": "arxiv-263399", "<|cite_6|>": "arxiv-183660", "<|multi_cite_7_1|>": "arxiv-183660", "<|multi_cite_7_2|>": "arxiv-236277", "<|multi_cite_8_1|>": "arxiv-279195", "<|multi_cite_8_2|>": "arxiv-377057", "<|multi_cite_9_1|>": "arxiv-153424", "<|multi_cite_9_2|>": "arxiv-220059", "<|cite_10|>": "ss-847310", "<|cite_11|>": "arxiv-347907", "<|cite_12|>": "arxiv-149723", "<|cite_13|>": "ss-946865", "<|cite_14|>": "arxiv-259548", "<|cite_15|>": "arxiv-183660", "<|cite_16|>": "arxiv-263399", "<|cite_17|>": "arxiv-183660", "<|multi_cite_18_1|>": "arxiv-212264", "<|multi_cite_18_2|>": "arxiv-405230", "<|multi_cite_19_1|>": "arxiv-236277", "<|multi_cite_19_2|>": "arxiv-347907", "<|multi_cite_20_1|>": "arxiv-207689", "<|multi_cite_20_2|>": "arxiv-263399", "<|multi_cite_20_3|>": "arxiv-373552", "<|multi_cite_21_1|>": "ss-681884", "<|multi_cite_21_2|>": "arxiv-249269", "<|multi_cite_21_3|>": "arxiv-226496", "<|multi_cite_21_4|>": "arxiv-274758", "<|multi_cite_22_1|>": "arxiv-371419", "<|multi_cite_22_2|>": "arxiv-401121", "<|multi_cite_23_1|>": "arxiv-270338", "<|multi_cite_23_2|>": "arxiv-321645", "<|multi_cite_23_3|>": "arxiv-237770", "<|cite_31|>": "ss-847310", "<|cite_24|>": "arxiv-73321", "<|cite_25|>": "arxiv-129813", "<|cite_26|>": "arxiv-125497", "<|cite_27|>": "arxiv-165472", "<|multi_cite_28_1|>": "arxiv-377940", "<|multi_cite_28_2|>": "ss-1366278", "<|multi_cite_29_1|>": "arxiv-376650", "<|multi_cite_29_2|>": "arxiv-333611", "<|multi_cite_29_3|>": "ss-945707", "<|multi_cite_30_1|>": "arxiv-210513", "<|multi_cite_30_2|>": "arxiv-375966", "<|cite_32|>": "arxiv-374106"} |
2001.04609 | <|paper_start|> Title: Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution
Abstract: Spatial-Spectral Residual Network for Hyperspectral Image Super-Resolution: Deep learning-based hyperspectral image super-resolution (SR) methods have achieved great success recently. However, most existing models cannot effectively explore spatial information and spectral information between bands simultaneously, and thus achieve relatively low performance. To address this issue, in this paper, we propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet). Our method can effectively explore spatial-spectral information by using 3D convolution instead of 2D convolution, which enables the network to better extract potential information. Furthermore, we design a spectral-spatial residual module (SSRM) to adaptively learn more effective features from all the hierarchical features within each unit through local feature fusion, significantly improving the performance of the algorithm. In each unit, we employ spatial and temporal separable 3D convolution to extract spatial and spectral information, which not only reduces memory usage and computational cost, but also makes the network easier to train. Extensive evaluations and comparisons on three benchmark datasets demonstrate that the proposed approach achieves superior performance in comparison to existing state-of-the-art methods.
Introduction
\IEEEPARstart{H}{yperspectral} imaging systems collect surface information in tens to hundreds of continuous spectral bands to acquire hyperspectral images. Compared with multispectral or natural images, a hyperspectral image contains more abundant spectral information about ground objects, which can reflect the subtle spectral properties of the measured objects in detail <|cite_start|> (Reference: An efficient clustering method for hyperspectral optimal band selection via shared nearest neighbor: A hyperspectral image (HSI) has many bands, which leads to high correlation between adjacent bands, so it is necessary to find representative subsets before further analysis. To address this issue, band selection is considered as an effective approach that removes redundant bands for HSI. Recently, many band selection methods have been proposed, but the majority of them have extremely poor accuracy in a small number of bands and require multiple iterations, which does not meet the purpose of band selection. Therefore, we propose an efficient clustering method based on shared nearest neighbor (SNNC) for hyperspectral optimal band selection, claiming the following contributions: (1) the local density of each band is obtained by shared nearest neighbor, which can more accurately reflect the local distribution characteristics; (2) in order to acquire a band subset containing a large amount of information, the information entropy is taken as one of the weight factors; (3) a method for automatically selecting the optimal band subset is designed by the slope change. The experimental results reveal that compared with other methods, the proposed method has competitive computational time and the selected bands achieve higher overall classification accuracy on different data sets, especially when the number of bands is small.) <|cite_end|>. As a result, hyperspectral imaging is widely used in various fields, such as mineral exploration <|cite_start|> (Reference: Remote sensing for mineral exploration: ) <|cite_end|>, medical diagnosis <|cite_start|> (Reference: Dual‐modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks: ) <|cite_end|>, plant detection <|cite_start|> (Reference: Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress: ) <|cite_end|>, etc. However, the acquired hyperspectral image is often of low resolution because of environmental interference and other factors, which limits the performance of high-level tasks, including change detection, image classification <|cite_start|> (Reference: Locality and Structure Regularized Low Rank Representation for Hyperspectral Image Classification: Hyperspectral image (HSI) classification, which aims to assign an accurate label for hyperspectral pixels, has drawn great interest in recent years. Although low rank representation (LRR) has been used to classify HSI, its ability to segment each class from the whole HSI data has not been exploited fully yet. LRR has a good capacity to capture the underlying lowdimensional subspaces embedded in original data. However, there are still two drawbacks for LRR. First, LRR does not consider the local geometric structure within data, which makes the local correlation among neighboring data easily ignored. Second, the representation obtained by solving LRR is not discriminative enough to separate different data.
In this paper, a novel locality and structure regularized low rank representation (LSLRR) model is proposed for HSI classification. To overcome the above limitations, we present locality constraint criterion (LCC) and structure preserving strategy (SPS) to improve the classical LRR. Specifically, we introduce a new distance metric, which combines both spatial and spectral features, to explore the local similarity of pixels. Thus, the global and local structures of HSI data can be exploited sufficiently. Besides, we propose a structure constraint to make the representation have a near block-diagonal structure. This helps to determine the final classification labels directly. Extensive experiments have been conducted on three popular HSI datasets. And the experimental results demonstrate that the proposed LSLRR outperforms other state-of-the-art methods.) <|cite_end|>, etc.
\begin{figure}[t]
\centering
\includegraphics[height=6cm,width=0.48\textwidth]{visual_1.pdf}
\caption{Comparisons of our SSRNet with existing methods on hyperspectral image SR for scale factor $\times$4. The absolute error map of one band, computed between the reconstructed hyperspectral image and the ground truth, is shown. In general, the bluer the absolute error map is, the better the restored image is.}
\label{fig:fig1}
\end{figure}
To better and accurately describe the ground objects, the hyperspectral image super-resolution (SR) is proposed <|cite_start|> (Reference: Hyperspectral Image Super-Resolution Using Deep Feature Matrix Factorization: Hyperspectral images (HSIs) can describe the subtle differences in the spectral signatures of materials. However, they have low spatial resolution due to various hardware limitations. Improving it via postprocess without an auxiliary high-resolution (HR) image still remains a challenging problem. In this paper, we address this problem and propose a new HSI super-resolution (SR) method. Our approach, called deep feature matrix factorization (DFMF), blends feature matrix extracted by a deep neural network (DNN) with nonnegative matrix factorization strategy for super-resolving real-scene HSI. The estimation of the HR HSI is formulated as a combination of latent spatial feature matrix and spectral feature matrix. In the DFMF model, the input low-resolution (LR) HSI is first partitioned into several subsets according to the correlation matrix, and the key band is selected from each subset. Then, the key band group is super-resolved by a DNN model, and the HR key band group is then used as a guide to carry out deep spatial feature matrix. Specifically, the input LR HSI with prototype reflectance spectral vectors of the scene will be preserved when super-resolving in a spatial domain. Thus, the nonnegative spectral and spatial feature matrices are extracted simultaneously from alternately factorizing the pair of LR HSI and the HR key band group. Finally, the HR HSI is obtained by the integration of the spectral and spatial feature matrices. Experiments have been conducted on real-scene remote sensing HSI. Comparative analyses validate that the proposed DFMF method presents a superior super-resolving performance, as it preserves spectral information better.) <|cite_end|> <|cite_start|> (Reference: Hyperspectral image super-resolution via non-negative structured sparse representation: Hyperspectral imaging has many applications from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and a HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. 
The experimental results on both public datasets and real LR hypspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency.) <|cite_end|> <|cite_start|> (Reference: Super-resolution reconstruction of hyperspectral images: Hyperspectral imagery is used for a wide variety of applications, including target detection, tacking, agricultural monitoring and natural resources exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that are not available in a single band. Unfortunately, many factors such as sensor noise and atmospheric scattering degrade the spatial quality of these images. Recently, many algorithms are introduced in the literature to improve the resolution of hyperspectral images [7]. In this paper, we propose a new method to produce high resolution bands from low resolution bands that are strongly correlated to the corresponding high resolution panchromatic image. The proposed method is based on using the local correlation instead of using the global correlation to improve the estimated interpolation in order to construct the high resolution image. The utilization of local correlation significantly improved the resolution of high resolution images when compared to the corresponding results obtained using the traditional algorithms. The local correlation is implemented by using predefined small windows across the low resolution image. In addition, numerous experiments are conducted to investigate the effect of the chosen window size in the image quality. Experiments results obtained using real life hyperspectral imagery is presented to verify the effectiveness of the proposed algorithm.) <|cite_end|>. It aims to restore high-resolution hyperspectral image from degraded low-resolution hyperspectral image. In practical application, the objects in the image are often detected or recognized according to the spectral reflectance of the object. Therefore, spectral and spatial resolution should be considered simultaneously for hyperspectral image SR, which is different from natural image SR in computer vision <|cite_start|> (Reference: Channel-wise and Spatial Feature Modulation Network for Single Image Super-Resolution: The performance of single image super-resolution has achieved significant improvement by utilizing deep convolutional neural networks (CNNs). The features in deep CNN contain different types of information which make different contributions to image reconstruction. However, most CNN-based models lack discriminative ability for different types of information and deal with them equally, which results in the representational capacity of the models being limited. On the other hand, as the depth of neural networks grows, the long-term information coming from preceding layers is easy to be weaken or lost in late layers, which is adverse to super-resolving image. To capture more informative features and maintain long-term information for image super-resolution, we propose a channel-wise and spatial feature modulation (CSFM) network in which a sequence of feature-modulation memory (FMM) modules is cascaded with a densely connected structure to transform low-resolution features to high informative features. 
In each FMM module, we construct a set of channel-wise and spatial attention residual (CSAR) blocks and stack them in a chain structure to dynamically modulate multi-level features in a global-and-local manner. This feature modulation strategy enables the high contribution information to be enhanced and the redundant information to be suppressed. Meanwhile, for long-term information persistence, a gated fusion (GF) node is attached at the end of the FMM module to adaptively fuse hierarchical features and distill more effective information via the dense skip connections and the gating mechanism. Extensive quantitative and qualitative evaluations on benchmark datasets illustrate the superiority of our proposed method over the state-of-the-art methods.) <|cite_end|>.
Since the spatial resolution of hyperspectral image is lower than that of RGB image <|cite_start|> (Reference: Unsupervised Sparse Dirichlet-Net for Hyperspectral Image Super-Resolution: In many computer vision applications, obtaining images of high resolution in both the spatial and spectral domains are equally important. However, due to hardware limitations, one can only expect to acquire images of high resolution in either the spatial or spectral domains. This paper focuses on hyperspectral image super-resolution (HSI-SR), where a hyperspectral image (HSI) with low spatial resolution (LR) but high spectral resolution is fused with a multispectral image (MSI) with high spatial resolution (HR) but low spectral resolution to obtain HR HSI. Existing deep learning-based solutions are all supervised that would need a large training set and the availability of HR HSI, which is unrealistic. Here, we make the first attempt to solving the HSI-SR problem using an unsupervised encoder-decoder architecture that carries the following uniquenesses. First, it is composed of two encoder-decoder networks, coupled through a shared decoder, in order to preserve the rich spectral information from the HSI network. Second, the network encourages the representations from both modalities to follow a sparse Dirichlet distribution which naturally incorporates the two physical constraints of HSI and MSI. Third, the angular difference between representations are minimized in order to reduce the spectral distortion. We refer to the proposed architecture as unsupervised Sparse Dirichlet-Net, or uSDN. Extensive experimental results demonstrate the superior performance of uSDN as compared to the state-of-the-art.) <|cite_end|>, existing methods mainly fuse high-resolution RGB image with low-resolution hyperspectral image <|cite_start|> (Reference: RGB-guided hyperspectral image upsampling: Hyperspectral imaging usually lack of spatial resolution due to limitations of hardware design of imaging sensors. On the contrary, latest imaging sensors capture a RGB image with resolution of multiple times larger than a hyperspectral image. In this paper, we present an algorithm to enhance and upsample the resolution of hyperspectral images. Our algorithm consists of two stages: spatial upsampling stage and spectrum substitution stage. The spatial upsampling stage is guided by a high resolution RGB image of the same scene, and the spectrum substitution stage utilizes sparse coding to locally refine the upsampled hyperspectral image through dictionary substitution. Experiments show that our algorithm is highly effective and has outperformed state-of-the-art matrix factorization based approaches.) <|cite_end|> <|cite_start|> (Reference: Hierarchical Beta Process with Gaussian Process Prior for Hyperspectral Image Super Resolution: ) <|cite_end|> <|cite_start|> (Reference: Bayesian Sparse Representation for Hyperspectral Image Super Resolution: Despite the proven efficacy of hyperspectral imaging in many computer vision tasks, its widespread use is hindered by its low spatial resolution, resulting from hardware limitations. We propose a hyperspectral image super resolution approach that fuses a high resolution image with the low resolution hyperspectral image using non-parametric Bayesian sparse representation. The proposed approach first infers probability distributions for the material spectra in the scene and their proportions. The distributions are then used to compute sparse codes of the high resolution image. 
To that end, we propose a generic Bayesian sparse coding strategy to be used with Bayesian dictionaries learned with the Beta process. We theoretically analyze the proposed strategy for its accurate performance. The computed codes are used with the estimated scene spectra to construct the super resolution hyperspectral image. Exhaustive experiments on two public databases of ground based hyperspectral images and a remotely sensed image show that the proposed approach outperforms the existing state of the art.) <|cite_end|>. For instance, Kwon \textit{et al.} <|cite_start|> (Reference: RGB-guided hyperspectral image upsampling: Hyperspectral imaging usually lack of spatial resolution due to limitations of hardware design of imaging sensors. On the contrary, latest imaging sensors capture a RGB image with resolution of multiple times larger than a hyperspectral image. In this paper, we present an algorithm to enhance and upsample the resolution of hyperspectral images. Our algorithm consists of two stages: spatial upsampling stage and spectrum substitution stage. The spatial upsampling stage is guided by a high resolution RGB image of the same scene, and the spectrum substitution stage utilizes sparse coding to locally refine the upsampled hyperspectral image through dictionary substitution. Experiments show that our algorithm is highly effective and has outperformed state-of-the-art matrix factorization based approaches.) <|cite_end|> utilize the RGB image corresponding to the high-resolution hyperspectral image to obtain a coarsely reconstructed image. The image is then locally refined via sparse coding to obtain a better SR result. Under prior knowledge on the spectral and spatial transform responses, Wycoff \textit{et al.} <|cite_start|> (Reference: A non-negative sparse promoting algorithm for high resolution hyperspectral imaging: Promoting the spatial resolution of off-the-shelf hyperspectral sensors is expected to improve typical computer vision tasks, such as target tracking and image classification. In this paper, we investigate the scenario in which two cameras, one with a conventional RGB sensor and the other with a hyperspectral sensor, capture the same scene, attempting to extract redundant and complementary information. We propose a non-negative sparse promoting framework to integrate the hyperspectral and RGB data into a high resolution hyperspectral set of data. The formulated problem is in the form of a sparse non-negative matrix factorization with prior knowledge on the spectral and spatial transform responses, and it can be handled by alternating optimization where each subproblem is solved by efficient convex optimization solvers; e.g., the alternating direction method of multipliers. Experiments on a public database show that our method achieves much lower average reconstruction errors than other state-of-the-art methods.) <|cite_end|> formulate the SR problem as a non-negative sparse matrix factorization. The problem is effectively addressed by the alternating direction method of multipliers <|cite_start|> (Reference: Distributed Optimization and Statistical Learning via the Alternating
Direction Method of Multipliers: Many problems of recent interest in statistics and machine learning can be posed in the framework of convex optimization. Due to the explosion in size and complexity of modern datasets, it is increasingly important to be able to solve problems with a very large number of features or training examples. As a result, both the decentralized collection or storage of these datasets as well as accompanying distributed solution methods are either necessary or at least highly desirable. In this review, we argue that the alternating direction method of multipliers is well suited to distributed convex optimization, and in particular to large-scale problems arising in statistics, machine learning, and related areas. The method was developed in the 1970s, with roots in the 1950s, and is equivalent or closely related to many other algorithms, such as dual decomposition, the method of multipliers, Douglas–Rachford splitting, Spingarn's method of partial inverses, Dykstra's alternating projections, Bregman iterative algorithms for l1 problems, proximal methods, and others. After briefly surveying the theory and history of the algorithm, we discuss applications to a wide variety of statistical and machine learning problems of recent interest, including the lasso, sparse logistic regression, basis pursuit, covariance selection, support vector machines, and many others. We also discuss general distributed optimization, extensions to the nonconvex setting, and efficient implementation, including some details on distributed MPI and Hadoop MapReduce implementations.) <|cite_end|>. These methods realize hyperspectral image SR under the guidance of RGB images generated by the same camera spectral response (CSR)\footnote{http://www.maxmax.com/aXRayIRCameras.htm}, ignoring the differences of CSR between datasets or scenes. Suppose that the same CSR value is used in the process of reconstruction, which will obviously lead to the poor robustness of the algorithm. To address this issue, Fu \textit{et al.} <|cite_start|> (Reference: Hyperspectral image super-resolution with optimized rgb guidance: To overcome the limitations of existing hyperspectral cameras on spatial/temporal resolution, fusing a low resolution hyperspectral image (HSI) with a high resolution RGB (or multispectral) image into a high resolution HSI has been prevalent. Previous methods for this fusion task usually employ hand-crafted priors to model the underlying structure of the latent high resolution HSI, and the effect of the camera spectral response (CSR) of the RGB camera on super-resolution accuracy has rarely been investigated. In this paper, we first present a simple and efficient convolutional neural network (CNN) based method for HSI super-resolution in an unsupervised way, without any prior training. Later, we append a CSR optimization layer onto the HSI super-resolution network, either to automatically select the best CSR in a given CSR dataset, or to design the optimal CSR under some physical restrictions. Experimental results show our method outperforms the state-of-the-arts, and the CSR optimization can further boost the accuracy of HSI super-resolution.) <|cite_end|> design the CSR function selection layer, which can automatically select the optimal CSR according to a particular scene. In addition to the CSR function selection mechanism, the method simulates CSR as the convolutional layer to learn the optimal CSR function, significantly improving the performance of hyperspectral image SR. 
However, such a scheme requires the pair of images to be well registered, which is usually difficult to guarantee in practice. Moreover, although these algorithms are claimed to be unsupervised, they are not truly unsupervised, since the RGB image derived from the ground truth is used during reconstruction.
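To make the fusion-based setting above concrete, the following NumPy sketch (illustrative only; the cube sizes, the degradation operators, and the CSR values are our assumptions rather than those of any cited method) simulates the two observations that such methods consume: a spatially degraded low-resolution hyperspectral image and a high-resolution RGB image obtained by projecting the spectra through a camera spectral response.
\begin{verbatim}
import numpy as np

# Assumed toy sizes: a high-resolution HSI cube with 31 bands over 64x64 pixels.
bands, H, W, scale = 31, 64, 64, 4
hr_hsi = np.random.rand(bands, H, W)            # ground-truth HR HSI (unknown in practice)

# Spatial degradation: average-pool each band by the scale factor -> LR HSI observation.
lr_hsi = hr_hsi.reshape(bands, H // scale, scale, W // scale, scale).mean(axis=(2, 4))

# Spectral degradation: project the spectra through an assumed CSR matrix -> HR RGB observation.
csr = np.random.rand(3, bands)                  # hypothetical 3 x bands camera spectral response
csr /= csr.sum(axis=1, keepdims=True)
hr_rgb = np.einsum('cb,bhw->chw', csr, hr_hsi)

print(lr_hsi.shape, hr_rgb.shape)               # (31, 16, 16) (3, 64, 64)
\end{verbatim}
Because the RGB observation is derived from the high-resolution cube itself, such schemes implicitly depend on information that is unavailable in a truly blind setting, which underlies the registration and supervision concerns raised above.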
The research of natural image SR has achieved great success in recent years due to the powerful representational ability of convolution neural networks (CNNs) <|cite_start|> (Reference: A Deep Journey into Super-resolution: A survey: Deep convolutional networks based super-resolution is a fast-growing field with numerous practical applications. In this exposition, we extensively compare 30+ state-of-the-art super-resolution Convolutional Neural Networks (CNNs) over three classical and three recently introduced challenging datasets to benchmark single image super-resolution. We introduce a taxonomy for deep-learning based super-resolution networks that groups existing methods into nine categories including linear, residual, multi-branch, recursive, progressive, attention-based and adversarial designs. We also provide comparisons between the models in terms of network complexity, memory footprint, model input and output, learning details, the type of network losses and important architectural differences (e.g., depth, skip-connections, filters). The extensive evaluation performed, shows the consistent and rapid growth in the accuracy in the past few years along with a corresponding boost in model complexity and the availability of large-scale datasets. It is also observed that the pioneering methods identified as the benchmark have been significantly outperformed by the current contenders. Despite the progress in recent years, we identify several shortcomings of existing techniques and provide future research directions towards the solution of these open problems.) <|cite_end|> <|cite_start|> (Reference: Residual Dense Network for Image Super-Resolution: A very deep convolutional neural network (CNN) has recently achieved great success for image super-resolution (SR) and offered hierarchical features as well. However, most deep CNN based SR models do not make full use of the hierarchical features from the original low-resolution (LR) images, thereby achieving relatively-low performance. In this paper, we propose a novel residual dense network (RDN) to address this problem in image SR. We fully exploit the hierarchical features from all the convolutional layers. Specifically, we propose residual dense block (RDB) to extract abundant local features via dense connected convolutional layers. RDB further allows direct connections from the state of preceding RDB to all the layers of current RDB, leading to a contiguous memory (CM) mechanism. Local feature fusion in RDB is then used to adaptively learn more effective features from preceding and current local features and stabilizes the training of wider network. After fully obtaining dense local features, we use global feature fusion to jointly and adaptively learn global hierarchical features in a holistic way. Extensive experiments on benchmark datasets with different degradation models show that our RDN achieves favorable performance against state-of-the-art methods.) <|cite_end|>. Its main principle is to learn the mapping function between low-resolution and high-resolution images in a supervised way. The typical methods include SRCNN <|cite_start|> (Reference: Learning a Deep Convolutional Network for Image Super-Resolution: ) <|cite_end|>, EDSR <|cite_start|> (Reference: Enhanced Deep Residual Networks for Single Image Super-Resolution: Recent research on super-resolution has progressed with the development of deep convolutional neural networks (DCNN). In particular, residual learning techniques exhibit improved performance. 
In this paper, we develop an enhanced deep super-resolution network (EDSR) with performance exceeding those of current state-of-the-art SR methods. The significant performance improvement of our model is due to optimization by removing unnecessary modules in conventional residual networks. The performance is further improved by expanding the model size while we stabilize the training procedure. We also propose a new multi-scale deep super-resolution system (MDSR) and training method, which can reconstruct high-resolution images of different upscaling factors in a single model. The proposed methods show superior performance over the state-of-the-art methods on benchmark datasets and prove its excellence by winning the NTIRE2017 Super-Resolution Challenge.) <|cite_end|>, and SRGAN <|cite_start|> (Reference: Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network: Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors? The behavior of optimization-based super-resolution methods is principally driven by the choice of the objective function. Recent work has largely focused on minimizing the mean squared reconstruction error. The resulting estimates have high peak signal-to-noise ratios, but they are often lacking high-frequency details and are perceptually unsatisfying in the sense that they fail to match the fidelity expected at the higher resolution. In this paper, we present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). To our knowledge, it is the first framework capable of inferring photo-realistic natural images for 4x upscaling factors. To achieve this, we propose a perceptual loss function which consists of an adversarial loss and a content loss. The adversarial loss pushes our solution to the natural image manifold using a discriminator network that is trained to differentiate between the super-resolved images and original photo-realistic images. In addition, we use a content loss motivated by perceptual similarity instead of similarity in pixel space. Our deep residual network is able to recover photo-realistic textures from heavily downsampled images on public benchmarks. An extensive mean-opinion-score (MOS) test shows hugely significant gains in perceptual quality using SRGAN. The MOS scores obtained with SRGAN are closer to those of the original high-resolution images than to those obtained with any state-of-the-art method.) <|cite_end|>, etc. Due to the satisfying performance in natural image SR, the scholars apply these methods for hyperspectral image SR. Inspired by deep recursive residual network <|cite_start|> (Reference: Image Super-resolution via Deep Recursive Residual Network: Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. 
Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.) <|cite_end|>, Li \textit{et al.} <|cite_start|> (Reference: Single Hyperspectral Image Super-Resolution with Grouped Deep Recursive Residual Network: Fusing a low spatial resolution hyperspectral images (HSIs) with an high spatial resolution conventional (e.g., RGB) image has underpinned much of recent progress in HSIs super-resolution. However, such a scheme requires this pair of images to be well registered, which is often difficult to be complied with in real applications. To address this problem, we present a novel single HSI super-resolution method, termed Grouped Deep Recursive Residual Network (GDRRN), which learns to directly map an input low resolution HSI to a high resolution HSI with a specialized deep neural network. To well depict the complicated non-linear mapping function with a compact network, a grouped recursive module is embedded into the global residual structure to transform the input HSIs. In addition, we conjoin the traditional mean squared error (MSE) loss with the spectral angle mapper (SAM) loss together to learn the network parameters, which enables to reduce both the numerical error and spectral distortion in the super-resolution results, and ultimately improve the performance. Sufficient experiments on the benchmark HSI dataset demonstrate the effectiveness of the proposed method in terms of single HSI super-resolution.) <|cite_end|> propose grouped deep recursive residual network (GDRRN) to execute hyperspectral image SR task in space. As we mentioned earlier, obviously, this method does not take into account spectral resolution and thus may lead to spectral distortion of the restored hyperspectral image. Considering this limitation, Mei \textit{et al.} <|cite_start|> (Reference: Hyperspectral image spatial super-resolution via 3D full convolutional neural network: Hyperspectral images are well-known for their fine spectral resolution to discriminate different materials. However, their spatial resolution is relatively low due to the trade-off in imaging sensor technologies, resulting in limitations in their applications. Inspired by recent achievements in convolutional neural network (CNN) based super-resolution (SR) for natural images, a novel three-dimensional full CNN (3D-FCNN) is constructed for spatial SR of hyperspectral images in this paper. Specifically, 3D convolution is used to exploit both the spatial context of neighboring pixels and spectral correlation of neighboring bands, such that spectral distortion when directly applying traditional CNN based SR algorithms to hyperspectral images in band-wise manners is alleviated. Furthermore, a sensor-specific mode is designed for the proposed 3D-FCNN such that none of the samples from the target scene are required for training. Fine-tuning by a small number of training samples from the target scene can further improve the performance of such a sensor-specific method. 
Extensive experimental results on four benchmark datasets from two well-known hyperspectral sensors, namely hyperspectral digital imagery collection experiment (HYDICE) and reflective optics system imaging spectrometer (ROSIS) sensors, demonstrate that our proposed 3D-FCNN outperforms several existing SR methods by ensuring higher quality both in reconstruction and spectral fidelity.) <|cite_end|> present a 3D full convolutional neural network (3D-FCNN) to exploit both the spatial context of neighboring pixels and the spectral correlation of adjacent bands. Although this method effectively uncovers spatial information and spectral information between bands, it changes the size of the estimated hyperspectral image, which is undesirable for image reconstruction.
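As a brief aside on why a 3D convolutional pipeline can alter the size of the restored cube, the following PyTorch check (a toy example with assumed tensor sizes; it does not reproduce the exact 3D-FCNN configuration) contrasts an unpadded 3$\times$3$\times$3 convolution, which shrinks every dimension including the band axis, with a zero-padded one that preserves the input size.
\begin{verbatim}
import torch
import torch.nn as nn

x = torch.randn(1, 1, 31, 64, 64)    # (batch, channel, bands, height, width)

# Without padding, a 3x3x3 kernel shrinks every dimension, including the band axis.
no_pad = nn.Conv3d(1, 8, kernel_size=3, padding=0)
print(no_pad(x).shape)                # torch.Size([1, 8, 29, 62, 62])

# Zero padding keeps the number of bands and the spatial size unchanged,
# which is the behaviour desired for reconstruction.
same_pad = nn.Conv3d(1, 8, kernel_size=3, padding=1)
print(same_pad(x).shape)              # torch.Size([1, 8, 31, 64, 64])
\end{verbatim}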
To address these drawbacks, in this paper, we propose a novel spectral-spatial residual network for hyperspectral image super-resolution (SSRNet). Our method learns the mapping function in a supervised way without using the RGB image corresponding to the high-resolution hyperspectral image. The whole network uses 3D convolution instead of 2D convolution to extract hyperspectral image features. In each spatial-spectral residual module (SSRM), the network can adaptively learn more effective spatial and spectral features from all the hierarchical units. To reduce memory usage and computational cost, we employ separable 3D convolution to extract spatial information and spectral information between bands in each residual unit. Using three evaluation metrics, we demonstrate that SSRNet is superior to state-of-the-art deep learning-based hyperspectral image SR approaches on three datasets. Besides, our proposed SSRNet generates more realistic visual results compared with other methods, as shown in Fig. \ref{fig:fig1}.
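For illustration, the following minimal PyTorch sketch shows one spectral-spatial residual unit built from the spatial (1$\times$3$\times$3) and spectral (3$\times$1$\times$1) separable 3D convolutions described above, together with a module that fuses the hierarchical unit outputs through a 1$\times$1$\times$1 convolution; the channel widths, the number of units, and the exact fusion are illustrative assumptions rather than our reference implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class SeparableResidualUnit(nn.Module):
    """Residual unit with a spatial (1x3x3) and a spectral (3x1x1) 3D convolution."""
    def __init__(self, channels=32):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.spectral = nn.Conv3d(channels, channels, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.spatial(x))
        out = self.spectral(out)
        return self.relu(out + x)                 # local residual learning

class SSRM(nn.Module):
    """Stack of units whose outputs are concatenated and fused by a 1x1x1 convolution
    (local feature fusion), mirroring the description above."""
    def __init__(self, channels=32, num_units=4):
        super().__init__()
        self.units = nn.ModuleList([SeparableResidualUnit(channels) for _ in range(num_units)])
        self.fuse = nn.Conv3d(channels * num_units, channels, kernel_size=1)

    def forward(self, x):
        feats, out = [], x
        for unit in self.units:
            out = unit(out)
            feats.append(out)
        return self.fuse(torch.cat(feats, dim=1)) + x   # fuse all hierarchical features

# Example: a 32-channel feature volume over 31 bands of a 32x32 patch keeps its size.
y = SSRM()(torch.randn(1, 32, 31, 32, 32))
print(y.shape)                                    # torch.Size([1, 32, 31, 32, 32])
\end{verbatim}
Factorizing each 3D kernel into a spatial part and a spectral part in this way is what keeps the memory footprint and parameter count manageable compared with full 3$\times$3$\times$3 kernels.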
In summary, our main contributions are as follows:
$\bullet$ A novel spatial-spectral residual network (SSRNet) is proposed to reconstruct hyperspectral images. The network can explore the spatial information and the spectral information between bands without changing the size of the hyperspectral image, which significantly enhances the performance.
$\bullet$ The spatial-spectral residual module (SSRM) is designed to adaptively preserve the accumulated features through local feature fusion. It makes full use of all the hierarchical features within each unit, which enables the network to fully extract the features of hyperspectral images.
$\bullet$ Spatial and temporal separable 3D convolution is employed to extract spatial and spectral features, respectively, in each unit. It reduces memory usage and computational cost, and makes the network easier to train.
The remainder of this paper is organized as follows: Section II reviews existing CNN-based hyperspectral image SR methods and details 3D convolution. Section III introduces our proposed SSRNet, including the network structure, the spectral-spatial residual module, and skip connections. Experiments on benchmark datasets are then presented in Section IV to verify our method. Finally, Section V concludes the paper.
Related Work
There exists an extensive body of literature on hyperspectral image SR. Here we first outline several deep learning-based hyperspectral image SR methods. To better understand the proposed method, we then give a brief introduction to 3D convolution.
\subsection{Hyperspectral Image SR with CNNs}
\begin{figure}[t]
\centering
\includegraphics[height=2cm,width=0.4\textwidth]{threeD.pdf}
\caption{Spatial and temporal separable 3D convolution.}
\label{fig:threeD}
\end{figure}
Recently, deep learning-based methods <|cite_start|> (Reference: Ssf-cnn: Spatial and spectral fusion with cnn for hyperspectral image super-resolution: Fusing a low-resolution hyperspectral image with the corresponding high-resolution RGB image to obtain a high-resolution hyperspectral image is usually solved as an optimization problem with prior-knowledge such as sparsity representation and spectral physical properties as constraints, which have limited applicability. Deep convolutional neural network extracts more comprehensive features and is proved to be effective in upsampling RGB images. However, directly applying CNNs to upsample either the spatial or spectral dimension alone may not produce pleasing results due to the neglect of complementary information from both low resolution hyper spectral and high resolution RGB images. This paper proposes two types of novel CNN architectures to take advantages of spatial and spectral fusion for hyperspectral image superresolution. Experiment results on benchmark datasets validate that the proposed spatial and spectral fusion CNNs outperforms the state-of-the-art methods and baseline CNN architectures in both quantitative values and visual qualities) <|cite_end|> have achieved remarkable advantages in the field of hyperspectral image SR. Here, we will briefly introduce several methods with CNNs. Li \textit{et al.} <|cite_start|> (Reference: Hyperspectral image super-resolution using deep convolutional neural network: ) <|cite_end|> propose a deep spectral difference convolutional neural network (SDCNN) by using five convolutional layers to improve spatial resolution. Under spatial constraint strategy, it makes the reconstructed hyperspectral image preserve spectral information through post-processing. Jia \textit{et al.} <|cite_start|> (Reference: Hyperspectral image super-resolution with spectral–spatial network: ABSTRACT The super-resolution problem for hyperspectral images is currently one of the most challenging topics in remote sensing. Increasingly effective methods have been presented to solve this ill-posed problem under certain circumstances. In this article, we propose a new approach named the spectral–spatial network (SSN), which can effectively increase spatial resolution while keeping spectral information. The SSN consists of two sections: a spatial section and a spectral section that contribute to enhancing spatial resolution and preserving spectral information, respectively. The spatial section is proposed to learn end-to-end mapping between single-band images, from low-resolution and high-resolution hyperspectral images. In this section, we enhance the traditional sub-pixel convolutional layer by adding a maximum variance principle that can realize nonlinear fitting through piecewise linearization. The spectral section aims to fine-tune spectral caves to keep the spectral signature with a spectral angle error loss function. In order to make the SSN converge quickly, we also develop a corresponding three-step training method. The experimental results on two databases, with both indoor and outdoor scenes, show that our proposed method performs better than the existing state-of-the-art methods.) <|cite_end|> present spectral-spatial network (SSN), including spatial and spectral sections. They try to learn the mapping function between low-resolution and high-resolution images and fine-tune spectrum. 
Yuan \textit{et al.} <|cite_start|> (Reference: Hyperspectral image superresolution by transfer learning: Hyperspectral image superresolution is a highly attractive topic in computer vision and has attracted many researchers’ attention. However, nearly all the existing methods assume that multiple observations of the same scene are required with the observed low-resolution hyperspectral image. This limits the application of superresolution. In this paper, we propose a new framework to enhance the resolution of hyperspectral images by exploiting the knowledge from natural images: The relationship between low/high-resolution images is the same as that between low/high-resolution hyperspectral images. In the proposed framework, the mapping between low- and high-resolution images can be learned by deep convolutional neural network and be transferred to hyperspectral image by borrowing the idea of transfer learning. In addition, to study the spectral characteristic between low- and high-resolution hyperspectral image, collaborative nonnegative matrix factorization (CNMF) is proposed to enforce collaborations between the low- and high-resolution hyperspectral images, which encourages the estimated solution to extract the same endmembers with low-resolution hyperspectral image. The experimental results on ground based and remote sensing data suggest that the proposed method achieves comparable performance without requiring any auxiliary images of the same scene.) <|cite_end|> utilize the knowledge from natural image to restore high-resolution hyperspectral image by transfer learning, and collaborative nonnegative matrix factorization is proposed to enforce collaborations between low-resolution and high-resolution hyperspectral images. All of these methods need two steps to achieve image reconstruction, that is, the algorithm first improves the spatial resolution. To avoid spectral distortion, some constraint criteria are then employed to retain the spectral information. It is clear that the spatial resolution may be changed while maintaining the spectral information.
Considering this issue, Li \textit{et al.} <|cite_start|> (Reference: Single Hyperspectral Image Super-Resolution with Grouped Deep Recursive Residual Network: Fusing a low spatial resolution hyperspectral images (HSIs) with an high spatial resolution conventional (e.g., RGB) image has underpinned much of recent progress in HSIs super-resolution. However, such a scheme requires this pair of images to be well registered, which is often difficult to be complied with in real applications. To address this problem, we present a novel single HSI super-resolution method, termed Grouped Deep Recursive Residual Network (GDRRN), which learns to directly map an input low resolution HSI to a high resolution HSI with a specialized deep neural network. To well depict the complicated non-linear mapping function with a compact network, a grouped recursive module is embedded into the global residual structure to transform the input HSIs. In addition, we conjoin the traditional mean squared error (MSE) loss with the spectral angle mapper (SAM) loss together to learn the network parameters, which enables to reduce both the numerical error and spectral distortion in the super-resolution results, and ultimately improve the performance. Sufficient experiments on the benchmark HSI dataset demonstrate the effectiveness of the proposed method in terms of single HSI super-resolution.) <|cite_end|> and Wang \textit{et al.} <|cite_start|> (Reference: Deep Residual Convolutional Neural Network for Hyperspectral Image Super-Resolution: ) <|cite_end|> introduce spectral angle error and set a new loss function by combining it with the mean square error. When training the network, these methods combine two error functions and deliberately reduce the distortion of the spectrum. However, it affects the performance of the reconstructed spatial resolution. Unlike natural image, the hyperspectral image has tens to hundreds of continuous spectral bands. Mei \textit{et al.} <|cite_start|> (Reference: Hyperspectral image spatial super-resolution via 3D full convolutional neural network: Hyperspectral images are well-known for their fine spectral resolution to discriminate different materials. However, their spatial resolution is relatively low due to the trade-off in imaging sensor technologies, resulting in limitations in their applications. Inspired by recent achievements in convolutional neural network (CNN) based super-resolution (SR) for natural images, a novel three-dimensional full CNN (3D-FCNN) is constructed for spatial SR of hyperspectral images in this paper. Specifically, 3D convolution is used to exploit both the spatial context of neighboring pixels and spectral correlation of neighboring bands, such that spectral distortion when directly applying traditional CNN based SR algorithms to hyperspectral images in band-wise manners is alleviated. Furthermore, a sensor-specific mode is designed for the proposed 3D-FCNN such that none of the samples from the target scene are required for training. Fine-tuning by a small number of training samples from the target scene can further improve the performance of such a sensor-specific method. Extensive experimental results on four benchmark datasets from two well-known hyperspectral sensors, namely hyperspectral digital imagery collection experiment (HYDICE) and reflective optics system imaging spectrometer (ROSIS) sensors, demonstrate that our proposed 3D-FCNN outperforms several existing SR methods by ensuring higher quality both in reconstruction and spectral fidelity.) 
<|cite_end|> take advantage of this property of hyperspectral images and adopt 3D convolution to extract features, which effectively retains the original spectral information and improves SR performance. However, the size of the reconstructed image is changed.
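For completeness, the combined objective mentioned above, i.e., the mean squared error plus a spectral angle mapper (SAM) term, can be sketched in PyTorch as follows; the weighting factor and the exact SAM formulation used by those methods may differ from this hedged version.
\begin{verbatim}
import torch
import torch.nn.functional as F

def sam_loss(pred, target, eps=1e-8):
    """Mean spectral angle (radians) between predicted and target spectra.
    pred, target: tensors of shape (batch, bands, height, width)."""
    dot = (pred * target).sum(dim=1)
    norm = pred.norm(dim=1) * target.norm(dim=1) + eps
    cos = torch.clamp(dot / norm, -1.0 + eps, 1.0 - eps)
    return torch.acos(cos).mean()

def combined_loss(pred, target, lam=0.1):
    """MSE for numerical error plus a SAM term to curb spectral distortion.
    The weight lam is an illustrative choice, not a value reported in prior work."""
    return F.mse_loss(pred, target) + lam * sam_loss(pred, target)

pred, target = torch.rand(2, 31, 32, 32), torch.rand(2, 31, 32, 32)
print(combined_loss(pred, target).item())
\end{verbatim}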
\subsection{3D Convolution}
For natural image SR, the scholars usually employ 2D convolution to extract the features and obtain good performance <|cite_start|> (Reference: Fast Spatio-Temporal Residual Network for Video Super-Resolution: Recently, deep learning based video super-resolution (SR) methods have achieved promising performance. To simultaneously exploit the spatial and temporal information of videos, employing 3-dimensional (3D) convolutions is a natural approach. However, straight utilizing 3D convolutions may lead to an excessively high computational complexity which restricts the depth of video SR models and thus undermine the performance. In this paper, we present a novel fast spatio-temporal residual network (FSTRN) to adopt 3D convolutions for the video SR task in order to enhance the performance while maintaining a low computational load. Specifically, we propose a fast spatio-temporal residual block (FRB) that divide each 3D filter to the product of two 3D filters, which have considerably lower dimensions. Furthermore, we design a cross-space residual learning that directly links the low-resolution space and the high-resolution space, which can greatly relieve the computational burden on the feature fusion and up-scaling parts. Extensive evaluations and comparisons on benchmark datasets validate the strengths of the proposed approach and demonstrate that the proposed network significantly outperforms the current state-of-the-art methods.) <|cite_end|> <|cite_start|> (Reference: Second-order Attention Network for Single Image Super-resolution: Recently, deep convolutional neural networks (CNNs) have been widely explored in single image super-resolution (SISR) and obtained remarkable performance. However, most of the existing CNN-based SISR methods mainly focus on wider or deeper architecture design, neglecting to explore the feature correlations of intermediate layers, hence hindering the representational power of CNNs. To address this issue, in this paper, we propose a second-order attention network (SAN) for more powerful feature expression and feature correlation learning. Specifically, a novel train- able second-order channel attention (SOCA) module is developed to adaptively rescale the channel-wise features by using second-order feature statistics for more discriminative representations. Furthermore, we present a non-locally enhanced residual group (NLRG) structure, which not only incorporates non-local operations to capture long-distance spatial contextual information, but also contains repeated local-source residual attention groups (LSRAG) to learn increasingly abstract feature representations. Experimental results demonstrate the superiority of our SAN network over state-of-the-art SISR methods in terms of both quantitative metrics and visual quality.) <|cite_end|>. As we introduced earlier, the hyperspectral image contains many continuous bands, which results in a significant characteristic that there is a great correlation between adjacent bands <|cite_start|> (Reference: Hyperspectral band selection via adaptive subspace partition strategy: Band selection is considered as a direct and effective method to reduce redundancy, which is to select some informative and distinctive bands from the original hyperspectral image cube. Recently, many clustering-based band selection methods have been proposed, but most of them only take into account redundancy between bands, neglecting the amount of information in the subset of selected bands. 
Furthermore, these algorithms never consider the hyperspectral bands as ordered. Based on these two facts, we propose a novel approach for hyperspectral band selection via an adaptive subspace partition strategy (ASPS). The main contributions are as follows: 1) the ASPS is adopted to partition the hyperspectral image cube into multiple subcubes by maximizing the ratio of interclass distance to intraclass distance; 2) unlike previous methods, we estimate the band noise and select the band containing minimum noise (high-quality band) in each subcube to represent the whole subcube; and 3) adaptive subspace partition is viewed as a general framework and thus forms the variant version. Experimental results on three public datasets show that the proposed method achieves satisfactory results in both accuracy and efficiency than some state-of-the-art algorithms.) <|cite_end|>. If we directly apply 2D convolution to the hyperspectral image SR task, the latent features between bands cannot be exploited effectively. Therefore, in this paper, we design our network with 3D convolutions to make full use of this characteristic and to jointly analyze the spatial and spectral features of the hyperspectral image.
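To make the distinction concrete, the following minimal PyTorch sketch (ours, for illustration only; the tensor sizes and band count are arbitrary) contrasts folding the spectral bands into the channel axis of a 2D convolution with keeping them as a separate depth axis for a 3D convolution:
\begin{verbatim}
import torch
import torch.nn as nn

x = torch.randn(1, 31, 64, 64)   # a hyperspectral cube with 31 bands

# 2D convolution: bands act as input channels and are mixed immediately.
conv2d = nn.Conv2d(in_channels=31, out_channels=64, kernel_size=3, padding=1)
y2d = conv2d(x)                  # (1, 64, 64, 64): the band axis is gone

# 3D convolution: bands form a depth axis, so local inter-band structure is kept.
conv3d = nn.Conv3d(in_channels=1, out_channels=64, kernel_size=3, padding=1)
y3d = conv3d(x.unsqueeze(1))     # (1, 64, 31, 64, 64): the band axis is preserved
\end{verbatim}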
Since 3D convolution takes into account the inter-frame motion information in the time dimension, it is widely used in video classification <|cite_start|> (Reference: Video Classification with Channel-Separated Convolutional Networks: Group convolution has been shown to offer great computational savings in various 2D convolutional architectures for image classification. It is natural to ask: 1) if group convolution can help to alleviate the high computational cost of video classification networks; 2) what factors matter the most in 3D group convolutional networks; and 3) what are good computation/accuracy trade-offs with 3D group convolutional networks. This paper studies the effects of different design choices in 3D group convolutional networks for video classification. We empirically demonstrate that the amount of channel interactions plays an important role in the accuracy of 3D group convolutional networks. Our experiments suggest two main findings. First, it is a good practice to factorize 3D convolutions by separating channel interactions and spatiotemporal interactions as this leads to improved accuracy and lower computational cost. Second, 3D channel-separated convolutions provide a form of regularization, yielding lower training accuracy but higher test accuracy compared to 3D convolutions. These two empirical findings lead us to design an architecture -- Channel-Separated Convolutional Network (CSN) -- which is simple, efficient, yet accurate. On Sports1M, Kinetics, and Something-Something, our CSNs are comparable with or better than the state-of-the-art while being 2-3 times more efficient.) <|cite_end|>, action recognition <|cite_start|> (Reference: MiCT: Mixed 3D/2D Convolutional Tube for Human Action Recognition: Human actions in videos are three-dimensional (3D) signals. Recent attempts use 3D convolutional neural networks (CNNs) to explore spatio-temporal information for human action recognition. Though promising, 3D CNNs have not achieved high performanceon on this task with respect to their well-established two-dimensional (2D) counterparts for visual recognition in still images. We argue that the high training complexity of spatio-temporal fusion and the huge memory cost of 3D convolution hinder current 3D CNNs, which stack 3D convolutions layer by layer, by outputting deeper feature maps that are crucial for high-level tasks. We thus propose a Mixed Convolutional Tube (MiCT) that integrates 2D CNNs with the 3D convolution module to generate deeper and more informative feature maps, while reducing training complexity in each round of spatio-temporal fusion. A new end-to-end trainable deep 3D network, MiCT-Net, is also proposed based on the MiCT to better explore spatio-temporal information in human actions. Evaluations on three well-known benchmark datasets (UCF101, Sport-1M and HMDB-51) show that the proposed MiCT-Net significantly outperforms the original 3D CNNs. Compared with state-of-the-art approaches for action recognition on UCF101 and HMDB51, our MiCT-Net yields the best performance.) <|cite_end|> and other fields. Unlike 2D convolution, the 3D convolution operation is implemented by convolving a 3D kernel with feature maps. Intuitively, the number of parameters of the training network using 3D convolution is an order of magnitude more than that of the 2D convolution. 
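As a rough back-of-the-envelope comparison (our notation, not taken from the cited works), a single convolutional layer with $C_{in}$ input channels, $C_{out}$ output channels, and kernel width $k$ has
\begin{equation}
P_{2D} = C_{in}\, C_{out}\, k^{2}, \qquad P_{3D} = C_{in}\, C_{out}\, k^{3},
\end{equation}
weights (ignoring biases), i.e., a factor of $k$ more per 3D layer, which compounds over a deep network. In addition, 3D layers keep the band dimension in every intermediate feature volume, which further increases memory consumption and computation.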
To address this problem, Xie \textit{et al.} <|cite_start|> (Reference: Rethinking Spatiotemporal Feature Learning: Speed-Accuracy Trade-offs in Video Classification: Despite the steady progress in video analysis led by the adoption of convolutional neural networks (CNNs), the relative improvement has been less drastic as that in 2D static image classification. Three main challenges exist including spatial (image) feature representation, temporal information representation, and model/computation complexity. It was recently shown by Carreira and Zisserman that 3D CNNs, inflated from 2D networks and pretrained on ImageNet, could be a promising way for spatial and temporal representation learning. However, as for model/computation complexity, 3D CNNs are much more expensive than 2D CNNs and prone to overfit. We seek a balance between speed and accuracy by building an effective and efficient video classification system through systematic exploration of critical network design choices. In particular, we show that it is possible to replace many of the 3D convolutions by low-cost 2D convolutions. Rather surprisingly, best result (in both speed and accuracy) is achieved when replacing the 3D convolutions at the bottom of the network, suggesting that temporal representation learning on high-level semantic features is more useful. Our conclusion generalizes to datasets with very different properties. When combined with several other cost-effective designs including separable spatial/temporal convolution and feature gating, our system results in an effective video classification system that that produces very competitive results on several action classification benchmarks (Kinetics, Something-something, UCF101 and HMDB), as well as two action detection (localization) benchmarks (JHMDB and UCF101-24).) <|cite_end|> develop typical separable 3D CNNs (S3D) model to accelerate video classification. In this model, the standard 3D convolution is replaced by spatial and temporal separable 3D convolution (see Fig. \ref{fig:threeD}), which demonstrates that this way can effectively reduce the number of parameters while still maintain good performance. <|paper_end|> | [
"<|reference_start|> Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress: <|reference_end|>",
"<|reference_start|> Super-resolution reconstruction of hyperspectral images: Hyperspectral imagery is used for a wide variety of applications, including target detection, tacking, agricultural monitoring and natural resources exploration. The main reason for using hyperspectral imagery is that these images reveal spectral information about the scene that are not available in a single band. Unfortunately, many factors such as sensor noise and atmospheric scattering degrade the spatial quality of these images. Recently, many algorithms are introduced in the literature to improve the resolution of hyperspectral images [7]. In this paper, we propose a new method to produce high resolution bands from low resolution bands that are strongly correlated to the corresponding high resolution panchromatic image. The proposed method is based on using the local correlation instead of using the global correlation to improve the estimated interpolation in order to construct the high resolution image. The utilization of local correlation significantly improved the resolution of high resolution images when compared to the corresponding results obtained using the traditional algorithms. The local correlation is implemented by using predefined small windows across the low resolution image. In addition, numerous experiments are conducted to investigate the effect of the chosen window size in the image quality. Experiments results obtained using real life hyperspectral imagery is presented to verify the effectiveness of the proposed algorithm. <|reference_end|>",
"<|reference_start|> RGB-guided hyperspectral image upsampling: Hyperspectral imaging usually lack of spatial resolution due to limitations of hardware design of imaging sensors. On the contrary, latest imaging sensors capture a RGB image with resolution of multiple times larger than a hyperspectral image. In this paper, we present an algorithm to enhance and upsample the resolution of hyperspectral images. Our algorithm consists of two stages: spatial upsampling stage and spectrum substitution stage. The spatial upsampling stage is guided by a high resolution RGB image of the same scene, and the spectrum substitution stage utilizes sparse coding to locally refine the upsampled hyperspectral image through dictionary substitution. Experiments show that our algorithm is highly effective and has outperformed state-of-the-art matrix factorization based approaches. <|reference_end|>",
"<|reference_start|> Learning a Deep Convolutional Network for Image Super-Resolution: <|reference_end|>"
] | [
3,
7,
10,
19
] | {"<|cite_1|>": "ss-2386681", "<|cite_2|>": "ss-2490776", "<|cite_3|>": "ss-2386682", "<|cite_4|>": "ss-1268928", "<|cite_6|>": "arxiv-203051", "<|multi_cite_7_1|>": "ss-2386683", "<|multi_cite_7_2|>": "ss-1268936", "<|multi_cite_7_3|>": "ss-1268931", "<|cite_8|>": "arxiv-174415", "<|cite_9|>": "arxiv-154921", "<|multi_cite_10_1|>": "ss-1089264", "<|multi_cite_10_2|>": "ss-1497615", "<|multi_cite_10_3|>": "ss-1009259", "<|cite_11|>": "ss-1089264", "<|cite_12|>": "ss-2298646", "<|cite_13|>": "ss-689032", "<|cite_14|>": "ss-1672224", "<|multi_cite_15_1|>": "arxiv-200127", "<|multi_cite_15_2|>": "arxiv-149524", "<|cite_16|>": "ss-940312", "<|cite_17|>": "arxiv-128911", "<|cite_18|>": "arxiv-105885", "<|cite_19|>": "ss-940315", "<|cite_20|>": "ss-693450", "<|cite_21|>": "ss-1268940", "<|cite_22|>": "ss-1672223", "<|cite_23|>": "ss-1519034", "<|cite_24|>": "ss-2386684", "<|cite_25|>": "ss-1500412", "<|cite_26|>": "ss-693450", "<|cite_27|>": "ss-1672222", "<|cite_28|>": "ss-1268940", "<|multi_cite_29_1|>": "arxiv-198401", "<|multi_cite_29_2|>": "ss-940316", "<|cite_30|>": "ss-1283381", "<|cite_31|>": "arxiv-198375", "<|cite_32|>": "ss-1733463", "<|cite_33|>": "arxiv-143038"} |
1809.06445-0 | <|paper_start|> Title: Efficient 2D-3D Matching for Multi-Camera Visual Localization
Abstract: Efficient 2D-3D Matching for Multi-Camera Visual Localization: Visual localization, i.e., determining the position and orientation of a vehicle with respect to a map, is a key problem in autonomous driving. We present a multi-camera visual inertial localization algorithm for large-scale environments. To efficiently and effectively match features against a pre-built global 3D map, we propose a prioritized feature matching scheme for multi-camera systems. In contrast to existing works designed for monocular cameras, we (1) tailor the prioritization function to the multi-camera setup and (2) run feature matching and pose estimation in parallel. This significantly accelerates the matching and pose estimation stages and allows us to dynamically adapt the matching effort based on the surrounding environment. In addition, we show how pose priors can be integrated into the localization system to increase efficiency and robustness. Finally, we extend our algorithm by fusing the absolute pose estimates with motion estimates from a multi-camera visual inertial odometry pipeline (VIO). This results in a system that provides reliable and drift-less pose estimation. Extensive experiments show that our localization runs fast and robustly under varying conditions, and that our extended algorithm enables reliable real-time pose estimation.
Introduction
Visual localization is the problem of estimating the position and orientation, \ie, the camera pose, from which a given query image was taken.
This problem plays a key role in autonomous navigation, \eg, for self-driving cars <|cite_start|> (Reference: 3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection: Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction.) <|cite_end|>and in Simultaneous Localization and Mapping (SLAM) <|cite_start|> (Reference: ORB-SLAM: a Versatile and Accurate Monocular SLAM System: This paper presents ORB-SLAM, a feature-based monocular SLAM system that operates in real time, in small and large, indoor and outdoor environments. The system is robust to severe motion clutter, allows wide baseline loop closing and relocalization, and includes full automatic initialization. Building on excellent algorithms of recent years, we designed from scratch a novel system that uses the same features for all SLAM tasks: tracking, mapping, relocalization, and loop closing. A survival of the fittest strategy that selects the points and keyframes of the reconstruction leads to excellent robustness and generates a compact and trackable map that only grows if the scene content changes, allowing lifelong operation. We present an exhaustive evaluation in 27 sequences from the most popular datasets. ORB-SLAM achieves unprecedented performance with respect to other state-of-the-art monocular SLAM approaches. For the benefit of the community, we make the source code public.) <|cite_end|>. It is also encountered in many 3D computer vision algorithms such as Structure-from-Motion (SfM) <|cite_start|> (Reference: Structure-From-Motion Revisited: Incremental Structure-from-Motion is a prevalent strategy for 3D reconstruction from unordered image collections. While incremental reconstruction systems have tremendously advanced in all regards, robustness, accuracy, completeness, and scalability remain the key problems towards building a truly general-purpose pipeline. We propose a new SfM technique that improves upon the state of the art to make a further step towards this ultimate goal. The full reconstruction pipeline is released to the public as an open-source implementation.) 
<|cite_end|>, camera calibration <|cite_start|> (Reference: 3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection: Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction.) <|cite_end|>, and Augmented Reality <|cite_start|> (Reference: Get Out of My Lab: Large-scale, Real-Time Visual-Inertial
Localization: Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.) <|cite_end|> <|cite_start|> (Reference: Scalable 6-DOF Localization on Mobile Devices: ) <|cite_end|>.
State-of-the-art approaches for visual localization are \emph{structure-based}, \ie, they explicitly or implicitly use a 3D model to represent the scene.
Explicit methods typically employ a sparse 3D point cloud constructed via SfM <|cite_start|> (Reference: Worldwide Pose Estimation Using 3D Point Clouds: ) <|cite_end|> <|cite_start|> (Reference: Get Out of My Lab: Large-scale, Real-Time Visual-Inertial
Localization: Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.) <|cite_end|> <|cite_start|> (Reference: Efficient \& Effective Prioritized Matching for Large-Scale Image-Based Localization: Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.) <|cite_end|> <|cite_start|> (Reference: City-scale Localization for Cameras with Known Vertical Direction: We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. 
Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.) <|cite_end|> <|cite_start|> (Reference: Camera pose voting for large-scale image-based localization: Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus facilitates to consider much more matches than previous approaches - whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state-of-the-art. At the same time, we show that using more matches does not automatically lead to a better performance.) <|cite_end|>, allowing them to associate each 3D point with one or more local image descriptors.
For a given query image, they establish a set of 2D-3D correspondences by comparing the descriptors of local features extracted from the image with the 3D point descriptors.
Using these matches, they then estimate the camera pose of the query by applying an $n$-point-pose solver <|cite_start|> (Reference: Review and analysis of solutions of the three point perspective pose estimation problem: ) <|cite_end|> <|cite_start|> (Reference: {Real-Time Solution to the Absolute Pose Problem with Unknown Radial Distortion and Focal Length: The problem of determining the absolute position and orientation of a camera from a set of 2D-to-3D point correspondences is one of the most important problems in computer vision with a broad range of applications. In this paper we present a new solution to the absolute pose problem for camera with unknown radial distortion and unknown focal length from five 2D-to-3D point correspondences. Our new solver is numerically more stable, more accurate, and significantly faster than the existing state-of-the-art minimal four point absolute pose solvers for this problem. Moreover, our solver results in less solutions and can handle larger radial distortions. The new solver is straightforward and uses only simple concepts from linear algebra. Therefore it is simpler than the state-of-the-art Groebner basis solvers. We compare our new solver with the existing state-of-the-art solvers and show its usefulness on synthetic and real datasets.) <|cite_end|> <|cite_start|> (Reference: Minimal solutions for the multi-camera pose estimation problem: In this paper, we propose a novel formulation to solve the pose estimation problem of a calibrated multi-camera system. The non-central rays that pass through the 3D world points and multi-camera system are elegantly represented as Plücker lines. This allows us to solve for the depth of the points along the Plücker lines with a minimal set of three-point correspondences. We show that the minimal solution for the depth of the points along the Plücker lines is an eight-degree polynomial that gives up to eight real solutions. The coordinates of the 3D world points in the multi-camera frame are computed from the known depths. Consequently, the pose of the multi-camera system, i.e. the rigid transformation between the world and multi-camera frames can be obtained from absolute orientation. We also derive a closed-form minimal solution for the absolute orientation. This removes the need for the computationally expensive singular value decompositions during the evaluations of the possible solutions for the depths. We identify the correct solution and do robust estimation with RANSAC. Finally, the solution is further refined by including all the inlier correspondences in a nonlinear refinement step. We verify our approach by showing comparisons with other existing approaches and results from large-scale real-world datasets.) <|cite_end|>inside a RANSAC loop <|cite_start|> (Reference: Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. 
In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing) <|cite_end|>.
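For illustration, the following schematic Python/OpenCV sketch (ours; it is not the implementation of the cited systems or of this paper) shows such an explicit pipeline. It assumes that the SfM points \texttt{pts3d} and their associated descriptors \texttt{pt3d\_descs}, as well as the query intrinsics \texttt{K} and distortion coefficients \texttt{dist}, are given:
\begin{verbatim}
import numpy as np
import cv2

def localize(query_img, pts3d, pt3d_descs, K, dist):
    # 1. Extract local features from the query image.
    sift = cv2.SIFT_create()
    kpts, descs = sift.detectAndCompute(query_img, None)

    # 2. 2D-3D matching: compare query descriptors with 3D point descriptors,
    #    keeping only matches that pass Lowe's ratio test.
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(descs, pt3d_descs, k=2)
    good = [m for m, n in matches if m.distance < 0.8 * n.distance]
    img_pts = np.float32([kpts[m.queryIdx].pt for m in good])
    obj_pts = np.float32([pts3d[m.trainIdx] for m in good])

    # 3. Robust pose estimation: a minimal pose solver inside a RANSAC loop.
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj_pts, img_pts, K, dist, reprojectionError=4.0)
    return ok, rvec, tvec, inliers
\end{verbatim}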
In contrast, implicit approaches <|cite_start|> (Reference: Learning Less is More - 6D Camera Localization via 3D Surface Regression: Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.) <|cite_end|> <|cite_start|> (Reference: On-the-Fly Adaptation of Regression Forests for Online Camera Relocalisation: Camera relocalisation is an important problem in computer vision, with applications in simultaneous localisation and mapping, virtual/augmented reality and navigation. Common techniques either match the current image against keyframes with known poses coming from a tracker, or establish 2D-to-3D correspondences between keypoints in the current image and points in the scene in order to estimate the camera pose. Recently, regression forests have become a popular alternative to establish such correspondences. They achieve accurate results, but must be trained offline on the target scene, preventing relocalisation in new environments. In this paper, we show how to circumvent this limitation by adapting a pre-trained forest to a new scene on the fly. Our adapted forests achieve relocalisation performance that is on par with that of offline forests, and our approach runs in under 150ms, making it desirable for real-time systems that require online relocalisation.) <|cite_end|> <|cite_start|> (Reference: Random forests versus Neural Networks — What's best for camera localization?: This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. 
Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.) <|cite_end|> <|cite_start|> (Reference: Scene coordinate regression forests for camera relocalization in RGB-D images: We address the problem of inferring the pose of an RGB-D camera relative to a known 3D scene, given only a single acquired image. Our approach employs a regression forest that is capable of inferring an estimate of each pixel's correspondence to 3D points in the scene's world coordinate frame. The forest uses only simple depth and RGB pixel comparison features, and does not require the computation of feature descriptors. The forest is trained to be capable of predicting correspondences at any pixel, so no interest point detectors are required. The camera pose is inferred using a robust optimization scheme. This starts with an initial set of hypothesized camera poses, constructed by applying the forest at a small fraction of image pixels. Preemptive RANSAC then iterates sampling more pixels at which to evaluate the forest, counting inliers, and refining the hypothesized poses. We evaluate on several varied scenes captured with an RGB-D camera and observe that the proposed technique achieves highly accurate relocalization and substantially out-performs two state of the art baselines.) <|cite_end|>forego explicit descriptor matching.
Instead, they learn the 2D-3D matching function directly, by regressing 3D scene point coordinates from image patches.
Again, the resulting 2D-3D correspondences are used for RANSAC-based pose estimation.
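Schematically (again our own sketch, reusing the NumPy/OpenCV imports from above; \texttt{scene\_coord\_net} stands for a hypothetical learned dense regressor and is not a real library function), the implicit pipeline replaces the explicit matching step:
\begin{verbatim}
# coords[v, u] is the predicted 3D scene coordinate of pixel (u, v).
coords = scene_coord_net(query_img)                  # (H, W, 3)
H, W = coords.shape[:2]
vs, us = np.mgrid[0:H, 0:W]
img_pts = np.stack([us, vs], axis=-1).reshape(-1, 2).astype(np.float32)
obj_pts = coords.reshape(-1, 3).astype(np.float32)
# In practice only a subset of pixels would be sampled before RANSAC.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj_pts, img_pts, K, None)
\end{verbatim}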
Implicit approaches can achieve a higher pose accuracy compared to explicit ones <|cite_start|> (Reference: On-the-Fly Adaptation of Regression Forests for Online Camera Relocalisation: Camera relocalisation is an important problem in computer vision, with applications in simultaneous localisation and mapping, virtual/augmented reality and navigation. Common techniques either match the current image against keyframes with known poses coming from a tracker, or establish 2D-to-3D correspondences between keypoints in the current image and points in the scene in order to estimate the camera pose. Recently, regression forests have become a popular alternative to establish such correspondences. They achieve accurate results, but must be trained offline on the target scene, preventing relocalisation in new environments. In this paper, we show how to circumvent this limitation by adapting a pre-trained forest to a new scene on the fly. Our adapted forests achieve relocalisation performance that is on par with that of offline forests, and our approach runs in under 150ms, making it desirable for real-time systems that require online relocalisation.) <|cite_end|> <|cite_start|> (Reference: Learning Less is More - 6D Camera Localization via 3D Surface Regression: Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.) <|cite_end|>.
Yet, they currently do not scale to larger outdoor scenes <|cite_start|> (Reference: Learning Less is More - 6D Camera Localization via 3D Surface Regression: Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.) <|cite_end|> <|cite_start|> (Reference: Semantic Visual Localization: Robust visual localization under a wide range of viewing conditions is a fundamental problem in computer vision. Handling the difficult cases of this problem is not only very challenging but also of high practical relevance, e.g., in the context of life-long localization for augmented reality or autonomous robots. In this paper, we propose a novel approach based on a joint 3D geometric and semantic understanding of the world, enabling it to succeed under conditions where previous approaches failed. Our method leverages a novel generative model for descriptor learning, trained on semantic scene completion as an auxiliary task. The resulting 3D descriptors are robust to missing observations by encoding high-level 3D geometric and semantic information. Experiments on several challenging large-scale localization datasets demonstrate reliable localization under extreme viewpoint, illumination, and geometry changes.) <|cite_end|>.
Most explicit structure-based localization methods focus on the monocular (single image) case, \eg, Augmented Reality on smartphones and tablets <|cite_start|> (Reference: Wide Area Localization on Mobile Phones: We present a fast and memory efficient method for localizing a mobile user's 6DOF pose from a single camera image. Our approach registers a view with respect to a sparse 3D point reconstruction. The 3D point dataset is partitioned into pieces based on visibility constraints and occlusion culling, making it scalable and efficient to handle. Starting with a coarse guess, our system only considers features that can be seen from the user's position. Our method is resource efficient, usually requiring only a few megabytes of memory, thereby making it feasible to run on low-end devices such as mobile phones. At the same time it is fast enough to give instant results on this device class.) <|cite_end|> <|cite_start|> (Reference: {Parallel Tracking and Mapping for Small AR Workspaces: This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems.) <|cite_end|> <|cite_start|> (Reference: Get Out of My Lab: Large-scale, Real-Time Visual-Inertial
Localization: Accurately estimating a robot's pose relative to a global scene model and precisely tracking the pose in real-time is a fundamental problem for navigation and obstacle avoidance tasks. Due to the computational complexity of localization against a large map and the memory consumed by the model, state-of-the-art approaches are either limited to small workspaces or rely on a server-side system to query the global model while tracking the pose locally. The latter approaches face the problem of smoothly integrating the server's pose estimates into the trajectory computed locally to avoid temporal discontinuities. In this paper, we demonstrate that large-scale, real-time pose estimation and tracking can be performed on mobile platforms with limited resources without the use of an external server. This is achieved by employing map and descriptor compression schemes as well as efficient search algorithms from computer vision. We derive a formulation for integrating the global pose information into a local state estimator that produces much smoother trajectories than current approaches. Through detailed experiments, we evaluate each of our design choices individually and document its impact on the overall system performance, demonstrating that our approach outperforms state-of-the-art algorithms for localization at scale.) <|cite_end|>, by developing strategies for efficient matching <|cite_start|> (Reference: Location Recognition Using Prioritized Feature Matching: ) <|cite_end|> <|cite_start|> (Reference: Efficient \& Effective Prioritized Matching for Large-Scale Image-Based Localization: Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.) <|cite_end|>or for scaling to larger or more complex scenes <|cite_start|> (Reference: City-scale Localization for Cameras with Known Vertical Direction: We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. 
To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.) <|cite_end|> <|cite_start|> (Reference: {Efficient Global 2D-3D Matching for Camera Localization in a Large-Scale 3D Map: Given an image of a street scene in a city, this paper develops a new method that can quickly and precisely pinpoint at which location (as well as viewing direction) the image was taken, against a pre-stored large-scale 3D point-cloud map of the city. We adopt the recently developed 2D-3D direct feature matching framework for this task [23,31,32,42–44]. This is a challenging task especially for large-scale problems. As the map size grows bigger, many 3D points in the wider geographical area can be visually very similar–or even identical–causing severe ambiguities in 2D-3D feature matching. The key is to quickly and unambiguously find the correct matches between a query image and the large 3D map. Existing methods solve this problem mainly via comparing individual features’ visual similarities in a local and per feature manner, thus only local solutions can be found, inadequate for large-scale applications. In this paper, we introduce a global method which harnesses global contextual information exhibited both within the query image and among all the 3D points in the map. This is achieved by a novel global ranking algorithm, applied to a Markov network built upon the 3D map, which takes account of not only visual similarities between individual 2D-3D matches, but also their global compatibilities (as measured by co-visibility) among all matching pairs found in the scene. Tests on standard benchmark datasets show that our method achieved both higher precision and comparable recall, compared with the state-of-the-art.) <|cite_end|> <|cite_start|> (Reference: Camera pose voting for large-scale image-based localization: Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus facilitates to consider much more matches than previous approaches - whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state-of-the-art. At the same time, we show that using more matches does not automatically lead to a better performance.) 
<|cite_end|> <|cite_start|> (Reference: Are Large-Scale 3D models really necessary for accurate visual localization?: Accurate visual localization is a key technology for autonomous navigation. 3D structure-based methods employ 3D models of the scene to estimate the full 6DOF pose of a camera very accurately. However, constructing (and extending) large-scale 3D models is still a significant challenge. In contrast, 2D image retrieval-based methods only require a database of geo-tagged images, which is trivial to construct and to maintain. They are often considered inaccurate since they only approximate the positions of the cameras. Yet, the exact camera pose can theoretically be recovered when enough relevant database images are retrieved. In this paper, we demonstrate experimentally that large-scale 3D models are not strictly necessary for accurate visual localization. We create reference poses for a large and challenging urban dataset. Using these poses, we show that combining image-based methods with local reconstructions results in a pose accuracy similar to the state-of-the-art structure-based methods. Our results suggest that we might want to reconsider the current approach for accurate large-scale localization.) <|cite_end|>.
Yet, many robotics applications, especially self-driving cars <|cite_start|> (Reference: 3D Visual Perception for Self-Driving Cars using a Multi-Camera System: Calibration, Mapping, Localization, and Obstacle Detection: Cameras are a crucial exteroceptive sensor for self-driving cars as they are low-cost and small, provide appearance information about the environment, and work in various weather conditions. They can be used for multiple purposes such as visual navigation and obstacle detection. We can use a surround multi-camera system to cover the full 360-degree field-of-view around the car. In this way, we avoid blind spots which can otherwise lead to accidents. To minimize the number of cameras needed for surround perception, we utilize fisheye cameras. Consequently, standard vision pipelines for 3D mapping, visual localization, obstacle detection, etc. need to be adapted to take full advantage of the availability of multiple cameras rather than treat each camera individually. In addition, processing of fisheye images has to be supported. In this paper, we describe the camera calibration and subsequent processing pipeline for multi-fisheye-camera systems developed as part of the V-Charge project. This project seeks to enable automated valet parking for self-driving cars. Our pipeline is able to precisely calibrate multi-camera systems, build sparse 3D maps for visual navigation, visually localize the car with respect to these maps, generate accurate dense maps, as well as detect obstacles based on real-time depth map extraction.) <|cite_end|>, benefit from using a multi-camera system that covers the full $360^\circ$ field-of-view (FoV) around the robot.
It has also been shown that cameras covering a larger FoV can be localized more accurately <|cite_start|> (Reference: Real-Time Self-Localization from Panoramic Images on Mobile Devices: Self-localization in large environments is a vital task for accurately registered information visualization in outdoor Augmented Reality (AR) applications. In this work, we present a system for self-localization on mobile phones using a GPS prior and an online-generated panoramic view of the user's environment. The approach is suitable for executing entirely on current generation mobile devices, such as smartphones. Parallel execution of online incremental panorama generation and accurate 6DOF pose estimation using 3D point reconstructions allows for real-time self-localization and registration in large-scale environments. The power of our approach is demonstrated in several experimental evaluations.) <|cite_end|>and that multi-camera systems significantly boost localization performance in challenging conditions <|cite_start|> (Reference: Benchmarking 6DOF Outdoor Visual Localization in Changing Conditions: Visual localization enables autonomous vehicles to navigate in their surroundings and augmented reality applications to link virtual to real worlds. Practical visual localization approaches need to be robust to a wide variety of viewing condition, including day-night changes, as well as weather and seasonal variations, while providing highly accurate 6 degree-of-freedom (6DOF) camera pose estimates. In this paper, we introduce the first benchmark datasets specifically designed for analyzing the impact of such factors on visual localization. Using carefully created ground truth poses for query images taken under a wide variety of conditions, we evaluate the impact of various factors on 6DOF camera pose estimation accuracy through extensive experiments with state-of-the-art localization approaches. Based on our results, we draw conclusions about the difficulty of different conditions, showing that long-term localization is far from solved, and propose promising avenues for future work, including sequence-based localization approaches and the need for better local features. Our benchmark is available at visuallocalization.net.) <|cite_end|>.
Existing work on multi-camera localization has mainly focused on stereo SLAM <|cite_start|> (Reference: Towards Robust Visual Odometry with a Multi-Camera System: We present a visual odometry (VO) algorithm for a multi-camera system and robust operation in challenging environments. Our algorithm consists of a pose tracker and a local mapper. The tracker estimates the current pose by minimizing photometric errors between the most recent keyframe and the current frame. The mapper initializes the depths of all sampled feature points using plane-sweeping stereo. To reduce pose drift, a sliding window optimizer is used to refine poses and structure jointly. Our formulation is flexible enough to support an arbitrary number of stereo cameras. We evaluate our algorithm thoroughly on five datasets. The datasets were captured in different conditions: daytime, night-time with near-infrared (NIR) illumination and nighttime without NIR illumination. Experimental results show that a multi-camera setup makes the VO more robust to challenging environments, especially night-time conditions, in which a single stereo configuration fails easily due to the lack of features.) <|cite_end|> <|cite_start|> (Reference: ORB-SLAM2: an Open-Source SLAM System for Monocular, Stereo and RGB-D Cameras: We present ORB-SLAM2 a complete SLAM system for monocular, stereo and RGB-D cameras, including map reuse, loop closing and relocalization capabilities. The system works in real-time on standard CPUs in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end based on bundle adjustment with monocular and stereo observations allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches to map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.) <|cite_end|> <|cite_start|> (Reference: Self-calibration and visual SLAM with a multi-camera system on a micro aerial vehicle: ) <|cite_end|>, camera calibration <|cite_start|> (Reference: Self-calibration and visual SLAM with a multi-camera system on a micro aerial vehicle: ) <|cite_end|> <|cite_start|> (Reference: Leveraging image-based localization for infrastructure-based calibration of a multi-camera rig: Most existing calibration methods for multi‐camera rigs are computationally expensive, use installations of known fiducial markers, and require expert supervision. We propose an alternative approach called infrastructure‐based calibration that is efficient, requires no modification of the infrastructure (or calibration area), and is completely unsupervised. In infrastructure‐based calibration, we use a map of a chosen calibration area and leverage image‐based localization to calibrate an arbitrary multi‐camera rig in near real‐time. Due to the use of a map, before we can apply infrastructure‐based calibration, we have to run a survey phase once to generate a map of the calibration area. 
In this survey phase, we use a survey vehicle equipped with a multi‐camera rig and a calibrated odometry system, and self‐calibration based on simultaneous localization and mapping to build the map that is based on natural features. The use of the calibrated odometry system ensures that the metric scale of the map is accurate. Our infrastructure‐based calibration method does not assume an overlapping field of view between any two cameras, and it does not require an initial guess of any extrinsic parameter. Through extensive field tests on various ground vehicles in a variety of environments, we demonstrate the accuracy and repeatability of the infrastructure‐based calibration method for calibration of a multi‐camera rig. The code for our infrastructure‐based calibration method is publicly available as part of the CamOdoCal library at https://github.com/hengli/camodocal.) <|cite_end|>, and camera pose estimation <|cite_start|> (Reference: Minimal Solvers for Generalized Pose and Scale Estimation from Two Rays and One Point: ) <|cite_end|> <|cite_start|> (Reference: Minimal solutions for the multi-camera pose estimation problem: In this paper, we propose a novel formulation to solve the pose estimation problem of a calibrated multi-camera system. The non-central rays that pass through the 3D world points and multi-camera system are elegantly represented as Plücker lines. This allows us to solve for the depth of the points along the Plücker lines with a minimal set of three-point correspondences. We show that the minimal solution for the depth of the points along the Plücker lines is an eight-degree polynomial that gives up to eight real solutions. The coordinates of the 3D world points in the multi-camera frame are computed from the known depths. Consequently, the pose of the multi-camera system, i.e. the rigid transformation between the world and multi-camera frames can be obtained from absolute orientation. We also derive a closed-form minimal solution for the absolute orientation. This removes the need for the computationally expensive singular value decompositions during the evaluations of the possible solutions for the depths. We identify the correct solution and do robust estimation with RANSAC. Finally, the solution is further refined by including all the inlier correspondences in a nonlinear refinement step. We verify our approach by showing comparisons with other existing approaches and results from large-scale real-world datasets.) <|cite_end|> <|cite_start|> (Reference: Large Scale SfM with the Distributed Camera Model: We introduce the distributed camera model, a novel model for Structure-from-Motion (SfM). This model describes image observations in terms of light rays with ray origins and directions rather than pixels. As such, the proposed model is capable of describing a single camera or multiple cameras simultaneously as the collection of all light rays observed. We show how the distributed camera model is a generalization of the standard camera model and describe a general formulation and solution to the absolute camera pose problem that works for standard or distributed cameras. The proposed method computes a solution that is up to 8 times more efficient and robust to rotation singularities in comparison with gDLS. Finally, this method is used in an novel large-scale incremental SfM pipeline where distributed cameras are accurately and robustly merged together. 
This pipeline is a direct generalization of traditional incremental SfM; however, instead of incrementally adding one camera at a time to grow the reconstruction the reconstruction is grown by adding a distributed camera. Our pipeline produces highly accurate reconstructions efficiently by avoiding the need for many bundle adjustment iterations and is capable of computing a 3D model of Rome from over 15,000 images in just 22 minutes.) <|cite_end|> <|cite_start|> (Reference: A minimal solution to the generalized pose-and-scale problem: We propose a novel solution to the generalized camera pose problem which includes the internal scale of the generalized camera as an unknown parameter. This further generalization of the well-known absolute camera pose problem has applications in multi-frame loop closure. While a well-calibrated camera rig has a fixed and known scale, camera trajectories produced by monocular motion estimation necessarily lack a scale estimate. Thus, when performing loop closure in monocular visual odometry, or registering separate structure-from-motion reconstructions, we must estimate a seven degree-of-freedom similarity transform from corresponding observations. Existing approaches solve this problem, in specialized configurations, by aligning 3D triangulated points or individual camera pose estimates. Our approach handles general configurations of rays and points and directly estimates the full similarity transformation from the 2D-3D correspondences. Four correspondences are needed in the minimal case, which has eight possible solutions. The minimal solver can be used in a hypothesize-and-test architecture for robust transformation estimation. Our solver also produces a least-squares estimate in the overdetermined case. The approach is evaluated experimentally on synthetic and real datasets, and is shown to produce higher accuracy solutions to multi-frame loop closure than existing approaches.) <|cite_end|>.
The latter two types of approaches model multi-camera systems as a generalized camera <|cite_start|> (Reference: Using Many Cameras as One: We illustrate how to consider a network of cameras as a single generalized camera in a framework proposed by Nayar (2001). We derive the discrete structure from motion equations for generalized cameras, and illustrate the corollaries to epipolar geometry. This formal mechanism allows one to use a network of cameras as if they were a single imaging device, even when they do not share a common center of projection. Furthermore, an analysis of structure from motion algorithms for this imaging model gives constraints on the optimal design of panoramic imaging systems constructed from multiple cameras.) <|cite_end|>, \ie, a camera with multiple centers of projection, to derive (minimal) solvers for pose estimation.
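For illustration only (our own notation, simplified from the cited formulations): a world point $X_j$, transformed by the rig pose $(R, t)$ that maps world coordinates into the rig frame, must lie on the ray of the camera that observes it,
\begin{equation*}
R\,X_j + t = c_{i(j)} + \lambda_j\, d_{i(j)}, \qquad \lambda_j > 0,
\end{equation*}
where $c_{i(j)}$ and $d_{i(j)}$ denote the projection center and the observed ray direction of camera $i(j)$ in the rig frame, and $\lambda_j$ is the unknown depth along the ray. Minimal solvers estimate $R$ and $t$ from a small number of such point-ray correspondences.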
Yet, one central aspect of multi-camera localization has received little attention:
Using multiple images leads to more features that need to be considered during feature matching and thus to significantly longer run-times.
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{figures/one_north_frame_example_compact}
\caption{Visual localization based on 2D-3D matching against a 3D scene model for a multi-camera system. We use one fisheye camera mounted on each side of a car. We show the correct matches (red), outlier 2D-3D matches (dark blue), outlier 3D-2D matches (light blue), and unmatched image features (green).}
\label{fig:one_north_frame_example}
\end{figure}
This paper aims to close this gap in the literature by focusing on efficient 2D-3D matching for multi-camera systems.
To this end, we make the following main contributions:
1) We develop a prioritized descriptor matching scheme for multi-camera systems.
Our strategy is based on Active Search <|cite_start|> (Reference: Efficient \& Effective Prioritized Matching for Large-Scale Image-Based Localization: Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.) <|cite_end|>, an efficient prioritization scheme developed for monocular cameras.
We show that a fast variant of Active Search, which leads to unstable pose estimates for a single image, is very well suited for multi-camera systems.
2) We interleave prioritized matching with camera pose estimation.
In contrast to standard schemes, which terminate search once a fixed number of matches has been found, our approach terminates as soon as sufficiently many geometrically consistent matches have been found.
3) Inspired by approaches for geometric outlier filtering <|cite_start|> (Reference: City-scale Localization for Cameras with Known Vertical Direction: We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.) <|cite_end|> <|cite_start|> (Reference: Camera pose voting for large-scale image-based localization: Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus facilitates to consider much more matches than previous approaches - whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state-of-the-art. At the same time, we show that using more matches does not automatically lead to a better performance.) <|cite_end|>, we develop an efficient geometric verification step that can be used to integrate potential pose priors.
This allows us to avoid comparing descriptors for geometrically implausible matches, which can make our search both more efficient and robust.
These latter two contributions are not restricted to the multi-camera case but are also applicable in the monocular scenario.
4) We show how to combine our approach with a VIO pipeline, enabling our system to provide accurate, drift-free pose estimates in real-time on a car.
Related Work
\label{sec:related_work}
In the following, we review related work from the areas of visual localization and multi-camera pose estimation.
\PAR{Efficient visual localization} approaches aim at accelerating the localization process <|cite_start|> (Reference: Location Recognition Using Prioritized Feature Matching: ) <|cite_end|> <|cite_start|> (Reference: Efficient \& Effective Prioritized Matching for Large-Scale Image-Based Localization: Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.) <|cite_end|> <|cite_start|> (Reference: Learning Less is More - 6D Camera Localization via 3D Surface Regression: Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.) <|cite_end|> <|cite_start|> (Reference: On-the-Fly Adaptation of Regression Forests for Online Camera Relocalisation: Camera relocalisation is an important problem in computer vision, with applications in simultaneous localisation and mapping, virtual/augmented reality and navigation. Common techniques either match the current image against keyframes with known poses coming from a tracker, or establish 2D-to-3D correspondences between keypoints in the current image and points in the scene in order to estimate the camera pose. 
Recently, regression forests have become a popular alternative to establish such correspondences. They achieve accurate results, but must be trained offline on the target scene, preventing relocalisation in new environments. In this paper, we show how to circumvent this limitation by adapting a pre-trained forest to a new scene on the fly. Our adapted forests achieve relocalisation performance that is on par with that of offline forests, and our approach runs in under 150ms, making it desirable for real-time systems that require online relocalisation.) <|cite_end|> <|cite_start|> (Reference: Random forests versus Neural Networks — What's best for camera localization?: This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.) <|cite_end|> <|cite_start|> (Reference: Geometric Loss Functions for Camera Pose Regression with Deep Learning: Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNet's performance across datasets ranging from indoor rooms to a small city.) <|cite_end|> <|cite_start|> (Reference: Image-based localization using LSTMs for structured feature correlation: In this work we propose a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. 
CNNs allow us to learn suitable feature representations for localization that are robust against motion blur and illumination changes. We make use of LSTM units on the CNN output, which play the role of a structured dimensionality reduction on the feature vector, leading to drastic improvements in localization performance. We provide extensive quantitative comparison of CNN-based and SIFT-based localization methods, showing the weaknesses and strengths of each. Furthermore, we present a new large-scale indoor dataset with accurate ground truth from a laser scanner. Experimental results on both indoor and outdoor public datasets show our method outperforms existing deep architectures, and can localize images in hard conditions, e.g., in the presence of mostly textureless surfaces, where classic SIFT-based methods fail.) <|cite_end|> <|cite_start|> (Reference: Deep Auxiliary Learning for Visual Localization and Odometry: Localization is an indispensable component of a robot's autonomy stack that enables it to determine where it is in the environment, essentially making it a precursor for any action execution or planning. Although convolutional neural networks have shown promising results for visual localization, they are still grossly outperformed by state-of-the-art local feature-based techniques. In this work, we propose VLocNet, a new convolutional neural network architecture for 6-DoF global pose regression and odometry estimation from consecutive monocular images. Our multitask model incorporates hard parameter sharing, thus being compact and enabling real-time inference, in addition to being end-to-end trainable. We propose a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates. We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation. Furthermore, we present extensive experimental evaluations utilizing our proposed Geometric Consistency Loss that show the effectiveness of multitask learning and demonstrate that our model is the first deep learning technique to be on par with, and in some cases outperforms state-of-the-art SIFT-based approaches.) <|cite_end|>.
Most related to our approach are explicit methods based on prioritized matching <|cite_start|> (Reference: Location Recognition Using Prioritized Feature Matching: ) <|cite_end|> <|cite_start|> (Reference: Efficient \& Effective Prioritized Matching for Large-Scale Image-Based Localization: Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.) <|cite_end|>.
These methods aim at designing a prioritization function such that features that are more likely to yield 2D-3D matches are considered first.
Once a fixed number of correspondences has been found, matching is terminated and RANSAC-based pose estimation is performed.
In this paper, we build upon Active Search <|cite_start|> (Reference: Efficient \& Effective Prioritized Matching for Large-Scale Image-Based Localization: Accurately determining the position and orientation from which an image was taken, i.e., computing the camera pose, is a fundamental step in many Computer Vision applications. The pose can be recovered from 2D-3D matches between 2D image positions and points in a 3D model of the scene. Recent advances in Structure-from-Motion allow us to reconstruct large scenes and thus create the need for image-based localization methods that efficiently handle large-scale 3D models while still being effective, i.e., while localizing as many images as possible. This paper presents an approach for large scale image-based localization that is both efficient and effective. At the core of our approach is a novel prioritized matching step that enables us to first consider features more likely to yield 2D-to-3D matches and to terminate the correspondence search as soon as enough matches have been found. Matches initially lost due to quantization are efficiently recovered by integrating 3D-to-2D search. We show how visibility information from the reconstruction process can be used to improve the efficiency of our approach. We evaluate the performance of our method through extensive experiments and demonstrate that it offers the best combination of efficiency and effectiveness among current state-of-the-art approaches for localization.) <|cite_end|>.
We show that a variant of it that is more efficient, but leads to inferior results for monocular images, is actually well-suited for multi-camera systems.
We adapt the prioritization scheme to encourage distributing matches over many images in the camera system.
We also propose an adaptive criterion that terminates matching once a certain number of correct matches is found rather than stopping search after finding a fixed number of (possibly incorrect) correspondences.
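As a rough illustration of how such an adaptive criterion interleaves matching and pose estimation, consider the following minimal Python-style sketch. It is a simplification under our own assumptions rather than the actual implementation; the prioritized feature queue, the 2D-3D matcher, and the RANSAC pose solver are treated as black-box callables.
\begin{verbatim}
# Minimal sketch: interleaved prioritized matching and pose estimation.
# The helpers are assumed callables, not part of the actual system.
def localize(feature_queue, find_match, estimate_pose_ransac,
             min_inliers=12, check_every=10):
    # feature_queue: features already ordered by the prioritization function
    matches = []
    for feature in feature_queue:
        match = find_match(feature)          # 2D-3D descriptor matching
        if match is None:
            continue
        matches.append(match)
        if len(matches) >= min_inliers and len(matches) % check_every == 0:
            pose, inliers = estimate_pose_ransac(matches)
            if len(inliers) >= min_inliers:  # enough geometrically consistent
                return pose                  # matches -> stop searching early
    if matches:
        pose, inliers = estimate_pose_ransac(matches)
        if len(inliers) >= min_inliers:
            return pose
    return None
\end{verbatim}
The key difference to a fixed-budget scheme is that the loop stops as soon as the pose estimate is supported by enough inliers, rather than after a fixed number of (possibly wrong) correspondences.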
\PAR{Scalable visual localization.}
In larger or more complex scenes, which are often characterized by more ambiguous scene elements, it is hard to distinguish between correct and incorrect matches based on descriptor comparisons alone.
State-of-the-art methods for scalable localization thus relax the matching criteria and perform geometric reasoning to handle the resulting large amounts of wrong matches <|cite_start|> (Reference: City-scale Localization for Cameras with Known Vertical Direction: We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points.) <|cite_end|> <|cite_start|> (Reference: Camera pose voting for large-scale image-based localization: Image-based localization approaches aim to determine the camera pose from which an image was taken. Finding correct 2D-3D correspondences between query image features and 3D points in the scene model becomes harder as the size of the model increases. Current state-of-the-art methods therefore combine elaborate matching schemes with camera pose estimation techniques that are able to handle large fractions of wrong matches. In this work we study the benefits and limitations of spatial verification compared to appearance-based filtering. We propose a voting-based pose estimation strategy that exhibits O(n) complexity in the number of matches and thus facilitates to consider much more matches than previous approaches - whose complexity grows at least quadratically. This new outlier rejection formulation enables us to evaluate pose estimation for 1-to-many matches and to surpass the state-of-the-art. At the same time, we show that using more matches does not automatically lead to a better performance.) <|cite_end|> <|cite_start|> (Reference: Toroidal Constraints for Two-Point Localization under High Outlier Ratios: Localizing a query image against a 3D model at large scale is a hard problem, since 2D-3D matches become more and more ambiguous as the model size increases. This creates a need for pose estimation strategies that can handle very low inlier ratios. In this paper, we draw new insights on the geometric information available from the 2D-3D matching process. As modern descriptors are not invariant against large variations in viewpoint, we are able to find the rays in space used to triangulate a given point that are closest to a query descriptor. It is well known that two correspondences constrain the camera to lie on the surface of a torus. Adding the knowledge of direction of triangulation, we are able to approximate the position of the camera from two matches alone. We derive a geometric solver that can compute this position in under 1 microsecond. Using this solver, we propose a simple yet powerful outlier filter which scales quadratically in the number of matches. 
We validate the accuracy of our solver and demonstrate the usefulness of our method in real world settings.) <|cite_end|> <|cite_start|> (Reference: Outlier Rejection for Absolute Pose Estimation with Known Orientation: Estimating the pose of a camera is a core problem in many geometric vision applications. While there has been much progress in the last two decades, the main difficulty is still dealing with data contaminated by outliers. For many scenes, e.g. with poor lightning conditions or repetitive textures, it is common that most of the correspondences are outliers. For real applications it is therefore essential to have robust estimation methods. In this paper we present an outlier rejection method for absolute pose estimation. We focus on the special case when the orientation of the camera is known. The problem is solved by projecting to a lower dimensional subspace where we are able to efficiently compute upper bounds on the maximum number of inliers. The method guarantees that only correspondences which cannot belong to an optimal pose are removed. In a number of challenging experiments we evaluate our method on both real and synthetic data and show improved performance compared to competing methods.) <|cite_end|>.
As a result, they are often too slow for real-time processing.
In this paper, we propose a geometric filter based on a potentially available pose prior, \eg, from VIO-based camera tracking or via a GPS sensor.
We show that this filter can be implemented very efficiently, allowing us to perform it before descriptor matching.
This leads to faster matching times, but also makes matching more robust as we can again relax the matching criteria.
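To make the idea concrete, the following deliberately simplified sketch shows the kind of pre-matching test a position prior enables (illustration only, not the verification step itself; richer priors such as orientation or viewing direction allow stricter tests). It merely restricts the candidate 3D points to a radius around the prior position, and the radius is an assumed tunable parameter.
\begin{verbatim}
import numpy as np

def prior_filter(points_xyz, prior_position, max_radius):
    # points_xyz: (N, 3) 3D model point positions
    # prior_position: (3,) rig position prior, e.g. from VIO or GPS
    # Returns a boolean mask; descriptor matching is then restricted
    # to the surviving points.
    sq_dist = np.sum((points_xyz - prior_position) ** 2, axis=1)
    return sq_dist <= max_radius ** 2
\end{verbatim}
Because such a test touches only point positions, it can run before any descriptor comparison, and its cost is small compared to descriptor matching.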
\PAR{Learning-based methods} integrate machine learning into the localization process.
This is usually done by either learning the 2D-3D matching stage <|cite_start|> (Reference: Learning Less is More - 6D Camera Localization via 3D Surface Regression: Popular research areas like autonomous driving and augmented reality have renewed the interest in image-based camera localization. In this work, we address the task of predicting the 6D camera pose from a single RGB image in a given 3D environment. With the advent of neural networks, previous works have either learned the entire camera localization process, or multiple components of a camera localization pipeline. Our key contribution is to demonstrate and explain that learning a single component of this pipeline is sufficient. This component is a fully convolutional neural network for densely regressing so-called scene coordinates, defining the correspondence between the input image and the 3D scene space. The neural network is prepended to a new end-to-end trainable pipeline. Our system is efficient, highly accurate, robust in training, and exhibits outstanding generalization capabilities. It exceeds state-of-the-art consistently on indoor and outdoor datasets. Interestingly, our approach surpasses existing techniques even without utilizing a 3D model of the scene during training, since the network is able to discover 3D scene geometry automatically, solely from single-view constraints.) <|cite_end|> <|cite_start|> (Reference: On-the-Fly Adaptation of Regression Forests for Online Camera Relocalisation: Camera relocalisation is an important problem in computer vision, with applications in simultaneous localisation and mapping, virtual/augmented reality and navigation. Common techniques either match the current image against keyframes with known poses coming from a tracker, or establish 2D-to-3D correspondences between keypoints in the current image and points in the scene in order to estimate the camera pose. Recently, regression forests have become a popular alternative to establish such correspondences. They achieve accurate results, but must be trained offline on the target scene, preventing relocalisation in new environments. In this paper, we show how to circumvent this limitation by adapting a pre-trained forest to a new scene on the fly. Our adapted forests achieve relocalisation performance that is on par with that of offline forests, and our approach runs in under 150ms, making it desirable for real-time systems that require online relocalisation.) <|cite_end|> <|cite_start|> (Reference: Random forests versus Neural Networks — What's best for camera localization?: This work addresses the task of camera localization in a known 3D scene given a single input RGB image. State-of-the-art approaches accomplish this in two steps: firstly, regressing for every pixel in the image its 3D scene coordinate and subsequently, using these coordinates to estimate the final 6D camera pose via RANSAC. To solve the first step. Random Forests (RFs) are typically used. On the other hand. Neural Networks (NNs) reign in many dense regression tasks, but are not test-time efficient. We ask the question: which of the two is best for camera localization? To address this, we make two method contributions: (1) a test-time efficient NN architecture which we term a ForestNet that is derived and initialized from a RF, and (2) a new fully-differentiable robust averaging technique for regression ensembles which can be trained end-to-end with a NN. 
Our experimental findings show that for scene coordinate regression, traditional NN architectures are superior to test-time efficient RFs and ForestNets, however, this does not translate to final 6D camera pose accuracy where RFs and ForestNets perform slightly better. To summarize, our best method, a ForestNet with a robust average, which has an equivalent fast and lightweight RF, improves over the state-of-the-art for camera localization on the 7-Scenes dataset [1]. While this work focuses on scene coordinate regression for camera localization, our innovations may also be applied to other continuous regression tasks.) <|cite_end|>or by directly regressing the camera pose <|cite_start|> (Reference: Geometric Loss Functions for Camera Pose Regression with Deep Learning: Deep learning has shown to be effective for robust and real-time monocular image relocalisation. In particular, PoseNet is a deep convolutional neural network which learns to regress the 6-DOF camera pose from a single image. It learns to localize using high level features and is robust to difficult lighting, motion blur and unknown camera intrinsics, where point based SIFT registration fails. However, it was trained using a naive loss function, with hyper-parameters which require expensive tuning. In this paper, we give the problem a more fundamental theoretical treatment. We explore a number of novel loss functions for learning camera pose which are based on geometry and scene reprojection error. Additionally we show how to automatically learn an optimal weighting to simultaneously regress position and orientation. By leveraging geometry, we demonstrate that our technique significantly improves PoseNet's performance across datasets ranging from indoor rooms to a small city.) <|cite_end|> <|cite_start|> (Reference: Image-based localization using LSTMs for structured feature correlation: In this work we propose a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. CNNs allow us to learn suitable feature representations for localization that are robust against motion blur and illumination changes. We make use of LSTM units on the CNN output, which play the role of a structured dimensionality reduction on the feature vector, leading to drastic improvements in localization performance. We provide extensive quantitative comparison of CNN-based and SIFT-based localization methods, showing the weaknesses and strengths of each. Furthermore, we present a new large-scale indoor dataset with accurate ground truth from a laser scanner. Experimental results on both indoor and outdoor public datasets show our method outperforms existing deep architectures, and can localize images in hard conditions, e.g., in the presence of mostly textureless surfaces, where classic SIFT-based methods fail.) <|cite_end|> <|cite_start|> (Reference: Deep Auxiliary Learning for Visual Localization and Odometry: Localization is an indispensable component of a robot's autonomy stack that enables it to determine where it is in the environment, essentially making it a precursor for any action execution or planning. Although convolutional neural networks have shown promising results for visual localization, they are still grossly outperformed by state-of-the-art local feature-based techniques. In this work, we propose VLocNet, a new convolutional neural network architecture for 6-DoF global pose regression and odometry estimation from consecutive monocular images. 
Our multitask model incorporates hard parameter sharing, thus being compact and enabling real-time inference, in addition to being end-to-end trainable. We propose a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates. We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation. Furthermore, we present extensive experimental evaluations utilizing our proposed Geometric Consistency Loss that show the effectiveness of multitask learning and demonstrate that our model is the first deep learning technique to be on par with, and in some cases outperforms state-of-the-art SIFT-based approaches.) <|cite_end|>.
However, recent work shows that these methods are either less accurate than feature-based methods such as the one presented in this paper <|cite_start|> (Reference: Image-based localization using LSTMs for structured feature correlation: In this work we propose a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. CNNs allow us to learn suitable feature representations for localization that are robust against motion blur and illumination changes. We make use of LSTM units on the CNN output, which play the role of a structured dimensionality reduction on the feature vector, leading to drastic improvements in localization performance. We provide extensive quantitative comparison of CNN-based and SIFT-based localization methods, showing the weaknesses and strengths of each. Furthermore, we present a new large-scale indoor dataset with accurate ground truth from a laser scanner. Experimental results on both indoor and outdoor public datasets show our method outperforms existing deep architectures, and can localize images in hard conditions, e.g., in the presence of mostly textureless surfaces, where classic SIFT-based methods fail.) <|cite_end|>or do not scale to larger scenes | [
"<|reference_start|> City-scale Localization for Cameras with Known Vertical Direction: We consider the problem of localizing a novel image in a large 3D model, given that the gravitational vector is known. In principle, this is just an instance of camera pose estimation, but the scale of the problem introduces some interesting challenges. Most importantly, it makes the correspondence problem very difficult so there will often be a significant number of outliers to handle. To tackle this problem, we use recent theoretical as well as technical advances. Many modern cameras and phones have gravitational sensors that allow us to reduce the search space. Further, there are new techniques to efficiently and reliably deal with extreme rates of outliers. We extend these methods to camera pose estimation by using accurate approximations and fast polynomial solvers. Experimental results are given demonstrating that it is possible to reliably estimate the camera pose despite cases with more than 99 percent outlier correspondences in city-scale models with several millions of 3D points. <|reference_end|>",
"<|reference_start|> {Parallel Tracking and Mapping for Small AR Workspaces: This paper presents a method of estimating camera pose in an unknown scene. While this has previously been attempted by adapting SLAM algorithms developed for robotic exploration, we propose a system specifically designed to track a hand-held camera in a small AR workspace. We propose to split tracking and mapping into two separate tasks, processed in parallel threads on a dual-core computer: one thread deals with the task of robustly tracking erratic hand-held motion, while the other produces a 3D map of point features from previously observed video frames. This allows the use of computationally expensive batch optimisation techniques not usually associated with real-time operation: The result is a system that produces detailed maps with thousands of landmarks which can be tracked at frame-rate, with an accuracy and robustness rivalling that of state-of-the-art model-based systems. <|reference_end|>",
"<|reference_start|> Image-based localization using LSTMs for structured feature correlation: In this work we propose a new CNN+LSTM architecture for camera pose regression for indoor and outdoor scenes. CNNs allow us to learn suitable feature representations for localization that are robust against motion blur and illumination changes. We make use of LSTM units on the CNN output, which play the role of a structured dimensionality reduction on the feature vector, leading to drastic improvements in localization performance. We provide extensive quantitative comparison of CNN-based and SIFT-based localization methods, showing the weaknesses and strengths of each. Furthermore, we present a new large-scale indoor dataset with accurate ground truth from a laser scanner. Experimental results on both indoor and outdoor public datasets show our method outperforms existing deep architectures, and can localize images in hard conditions, e.g., in the presence of mostly textureless surfaces, where classic SIFT-based methods fail. <|reference_end|>",
"<|reference_start|> Deep Auxiliary Learning for Visual Localization and Odometry: Localization is an indispensable component of a robot's autonomy stack that enables it to determine where it is in the environment, essentially making it a precursor for any action execution or planning. Although convolutional neural networks have shown promising results for visual localization, they are still grossly outperformed by state-of-the-art local feature-based techniques. In this work, we propose VLocNet, a new convolutional neural network architecture for 6-DoF global pose regression and odometry estimation from consecutive monocular images. Our multitask model incorporates hard parameter sharing, thus being compact and enabling real-time inference, in addition to being end-to-end trainable. We propose a novel loss function that utilizes auxiliary learning to leverage relative pose information during training, thereby constraining the search space to obtain consistent pose estimates. We evaluate our proposed VLocNet on indoor as well as outdoor datasets and show that even our single task model exceeds the performance of state-of-the-art deep architectures for global localization, while achieving competitive performance for visual odometry estimation. Furthermore, we present extensive experimental evaluations utilizing our proposed Geometric Consistency Loss that show the effectiveness of multitask learning and demonstrate that our model is the first deep learning technique to be on par with, and in some cases outperforms state-of-the-art SIFT-based approaches. <|reference_end|>"
] | [
9,
24,
67,
68
] | {"<|cite_1|>": "arxiv-133322", "<|cite_2|>": "arxiv-72495", "<|cite_3|>": "ss-783932", "<|cite_4|>": "arxiv-133322", "<|multi_cite_5_1|>": "ss-1034990", "<|multi_cite_5_2|>": "ss-1062661", "<|multi_cite_6_1|>": "ss-1347631", "<|multi_cite_6_2|>": "ss-1034990", "<|multi_cite_6_3|>": "ss-722643", "<|multi_cite_6_4|>": "ss-1080936", "<|multi_cite_6_5|>": "ss-1513134", "<|multi_cite_7_1|>": "ss-1523190", "<|multi_cite_7_2|>": "ss-1258200", "<|multi_cite_7_3|>": "ss-1372664", "<|cite_8|>": "ss-772411", "<|multi_cite_9_1|>": "arxiv-141469", "<|multi_cite_9_2|>": "arxiv-116210", "<|multi_cite_9_3|>": "ss-724919", "<|multi_cite_9_4|>": "ss-1268380", "<|multi_cite_10_1|>": "arxiv-116210", "<|multi_cite_10_2|>": "arxiv-141469", "<|multi_cite_11_1|>": "arxiv-141469", "<|multi_cite_11_2|>": "arxiv-143226", "<|multi_cite_12_1|>": "ss-1355833", "<|multi_cite_12_2|>": "ss-1427815", "<|multi_cite_12_3|>": "ss-1034990", "<|multi_cite_13_1|>": "ss-1355835", "<|multi_cite_13_2|>": "ss-722643", "<|multi_cite_14_1|>": "ss-1080936", "<|multi_cite_14_2|>": "ss-1092052", "<|multi_cite_14_3|>": "ss-1513134", "<|multi_cite_14_4|>": "ss-1236945", "<|multi_cite_15_1|>": "arxiv-133322", "<|cite_16|>": "ss-1638398", "<|cite_17|>": "arxiv-130571", "<|multi_cite_18_1|>": "ss-1259059", "<|multi_cite_18_2|>": "arxiv-108294", "<|multi_cite_18_3|>": "ss-2214615", "<|multi_cite_19_1|>": "ss-2214615", "<|multi_cite_19_2|>": "ss-1364377", "<|multi_cite_20_1|>": "ss-1300436", "<|multi_cite_20_2|>": "ss-1372664", "<|multi_cite_20_3|>": "arxiv-102056", "<|multi_cite_20_4|>": "ss-1380751", "<|cite_21|>": "ss-2303795", "<|cite_22|>": "ss-722643", "<|multi_cite_23_1|>": "ss-1080936", "<|multi_cite_23_2|>": "ss-1513134", "<|multi_cite_24_1|>": "ss-1355835", "<|multi_cite_24_2|>": "ss-722643", "<|multi_cite_24_3|>": "arxiv-141469", "<|multi_cite_24_4|>": "arxiv-116210", "<|multi_cite_24_5|>": "ss-724919", "<|multi_cite_24_6|>": "arxiv-120653", "<|multi_cite_24_7|>": "arxiv-110895", "<|multi_cite_24_8|>": "arxiv-151070", "<|multi_cite_25_1|>": "ss-1355835", "<|multi_cite_25_2|>": "ss-722643", "<|cite_26|>": "ss-722643", "<|multi_cite_27_1|>": "ss-1080936", "<|multi_cite_27_2|>": "ss-1513134", "<|multi_cite_27_3|>": "ss-2293598", "<|multi_cite_27_4|>": "ss-1657073", "<|multi_cite_28_1|>": "arxiv-141469", "<|multi_cite_28_2|>": "arxiv-116210", "<|multi_cite_28_3|>": "ss-724919", "<|multi_cite_29_1|>": "arxiv-120653", "<|multi_cite_29_2|>": "arxiv-110895", "<|multi_cite_29_3|>": "arxiv-151070", "<|cite_30|>": "arxiv-110895", "<|multi_cite_31_1|>": "arxiv-141469", "<|multi_cite_31_2|>": "arxiv-120653", "<|multi_cite_31_3|>": "arxiv-143226", "<|multi_cite_31_4|>": "arxiv-130571", "<|cite_32|>": "ss-2303795", "<|multi_cite_33_1|>": "ss-1300436", "<|multi_cite_33_2|>": "ss-1372664", "<|multi_cite_33_3|>": "arxiv-102056", "<|multi_cite_33_4|>": "ss-1380751", "<|cite_34|>": "arxiv-130571"} |
2006.14352 | <|paper_start|> Title: HARMer: Cyber-attacks Automation and Evaluation
Abstract: HARMer: Cyber-attacks Automation and Evaluation: With the increasing growth of cyber-attack incidents, it is important to develop innovative and effective techniques to assess and defend networked systems against cyber attacks. One of the well-known techniques for this is penetration testing, which is carried out by a group of security professionals (i.e., a red team). Penetration testing is also known to be effective at finding existing and new vulnerabilities; however, the quality of the security assessment can depend on the quality of the red team members and their time and devotion to the penetration testing. In this paper, we propose a novel automation framework for cyber-attack generation named `HARMer' to address the challenges of manual attack execution by the red team. Our proposed framework, design, and implementation are based on a scalable graphical security model called the Hierarchical Attack Representation Model (HARM). (1) We propose the requirements and the key phases for the automation framework. (2) We propose security metrics-based attack planning strategies along with their algorithms. (3) We conduct experiments in a real enterprise network and on Amazon Web Services. The results show how the different phases of the framework interact to model the attackers' operations. This framework will allow security administrators to assess the impact of various threats and attacks in an automated manner.
Introduction
\label{sec:introduction}
Despite the billions of dollars spent on the prevention of
cyber-attacks, cyber-criminals have continued to cause devastating financial losses to businesses, enterprises, governments, \textit{etc}. In 2018, the CSIS (Center for Strategic and International Studies), in partnership with McAfee, estimated the worldwide cost of cyber-attacks at about \$600 billion, and cybercrime is predicted to cost the world \$6 trillion annually by 2021. Therefore, there is a need for more innovative techniques to assess and defend networked systems against cyber-attacks.
Offensive security testing techniques have been employed to assess the security posture of networks by launching cyber attacks. These testing techniques include: 1) traditional penetration testing, where the testing focuses on identifying and exploiting system and network vulnerabilities <|cite_start|> (Reference: Vulnerability Assessment & Penetration Testing as a Cyber Defence Technology: ) <|cite_end|>, and 2) red teaming (RT), which assesses a network's resilience against cyber-attacks by emulating real cyber attackers <|cite_start|> (Reference: Red Team vs. Blue Team Hardware Trojan Analysis: Detection of a Hardware Trojan on an Actual ASIC: We infiltrate the ASIC development chain by inserting a small denial-of-service (DoS) hardware Trojan at the fabrication design phase into an existing VLSI circuit, thereby simulating an adversary at a semiconductor foundry. Both the genuine and the altered ASICs have been fabricated using a 180 nm CMOS process. The Trojan circuit adds an overhead of only 0.5% to the original design. In order to detect the hardware Trojan, we perform side-channel analyses and apply IC-fingerprinting techniques using templates, principal component analysis (PCA), and support vector machines (SVMs). As a result, we were able to successfully identify and classify all infected ASICs from non-infected ones. To the best of our knowledge, this is the first hardware Trojan manufactured as an ASIC and has successfully been analyzed using side channels.) <|cite_end|>. RT moves beyond penetration testing by imitating the actual steps that an attacker would take.
However, conducting a red team exercise is a manual process; hence, the quality of the security assessment can depend on the quality of the red team members and their time and devotion to the test exercise.\\
On the other hand, automating the activities of real attackers faces the major challenge of deciding the attacker's course of action.
Tools such as Attack Graphs (AG) <|cite_start|> (Reference: Computer-attack graph generation tool: This paper presents a tool for assessment of security attributes and vulnerabilities in computer networks. The tool generates attack graphs (Phillips and Swiler, 1998). Each node in the attack graph represents a possible attack state. Edges represent a change of state caused by a single action taken by the attacker or unwitting assistant, and are weighted by some metric (such as attacker effort or time to succeed). Generation of the attack graph requires algorithms that match information about attack requirements (specified in attack templates) to information about the network configuration and assumed attacker capabilities (attacker profile). The set of near-optimal shortest paths indicates the most exploitable components of the system configuration. This paper presents the status of the tool and discusses implementation issues, especially focusing on the data input needs and methods for eliminating redundant paths and nodes in the graph.) <|cite_end|> have been used to represent possible sequences of actions that attackers may take to achieve the attack goal, but the AG focuses on analyzing network vulnerabilities and producing a set of attack paths with no indication of the attacker's specific attack plan. Moreover, with the increasing size of modern networks, the AG has exponential complexity, which causes scalability problems <|cite_start|> (Reference: 22nd Annual Computer Security Applications Conference (ACSAC 2006), 11-15 December 2006, Miami Beach, Florida, USA: ) <|cite_end|>. Similarly, Attack Trees (ATs) <|cite_start|> (Reference: Subjective Attack Trees: Subjective attack trees (SATs) extend traditional attack trees by taking into account the uncertainty about the probability values of security events. Assigning precise values is often difficult due to lack of knowledge, or insufficient historical data, making the evaluation of risk in existing approaches unreliable, and therefore unreliable security decisions. With SATs, the author seeks to better reflect the reality underpinning the model and offer a better approach to decision-making via the modeling of uncertainty about the probability distributions in the form of subjective opinions, resulting in a model taking second-order uncertainty into account. The author further discusses how to conduct security analysis, such as risk measuring and security investments analysis, under the proposed model. Security investments analysis requires first to incorporate the model with countermeasures and then study how these countermeasures reduce risk in the presence of uncertainty about probability values. The importance and advantage of the SAT model are demonstrated through extended examples.) <|cite_end|> represent attacks as a tree with leaf nodes and child nodes, where leaf nodes show different ways of achieving the goal and child nodes represent attack steps.
However, ATs do not explicitly reflect the sequence of attack paths, nor do they specify a workable attack plan for the attacker. \\
Hong and Kim <|cite_start|> (Reference: Harms: Hierarchical attack representation models for network security analysis: Attack models can be used to assess network security. Purely graph based attack representation models (e.g., attack graphs) have a state-space explosion problem. Purely tree-based models (e.g., attack trees) cannot capture the path information explicitly. Moreover, the complex relationship between the host and the vulnerability information in attack models create difficulty in adjusting to changes in the network, which is impractical for modern large and dynamic network systems. To deal with these issues, we propose hierarchical attack representation models (HARMs). The main idea is to use two-layer hierarchy to separate the network topology information (in the upper layer) from the vulnerability information of each host (in the lower layer). We compare the HARMs with existing attack models (including attack graph and attack tree) in model complexity in the phase of construction, evaluation and modification.) <|cite_end|> <|cite_start|> (Reference: Towards scalable security analysis using multi-layered security models: ) <|cite_end|> addressed the scalability problem of AG by developing hierarchical models that combine (and separate the functionality of) the AGs and ATs into two or more hierarchical layers (this model is named the Hierarchical Attack Representation Model (HARM)). The HARM mainly comprises two layers: the upper layer, which captures the network reachability information (using an AG that models only the reachability information), and the lower layer, which captures the vulnerability information of each node in the network (using ATs). \\
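As a purely illustrative sketch of this layering (simplified: the lower layer is reduced to a flat list of vulnerabilities per host instead of a full AT, and all host names, CVE choices, and numeric values are made up for illustration), the two layers can be kept as separate structures:
\begin{verbatim}
import networkx as nx

# Upper layer: host-to-host reachability (an AG over hosts only).
upper = nx.DiGraph()
upper.add_edges_from([("attacker", "web"), ("web", "db"), ("web", "mail")])

# Lower layer: per-host vulnerability information (stand-in for an AT).
lower = {
    "web":  [{"vuln": "CVE-2017-0144", "prob": 0.8, "impact": 9.3}],
    "db":   [{"vuln": "CVE-2016-6662", "prob": 0.6, "impact": 10.0}],
    "mail": [{"vuln": "CVE-2020-0796", "prob": 0.4, "impact": 8.0}],
}
\end{verbatim}
With this separation, topology changes only touch the upper layer, while newly discovered vulnerabilities only touch the lower layer, which is what makes the model easier to update than a flat AG.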
Although the HARM has been used to generate the set of possible attack paths (similar to the AGs) to reach a target node, it has not been used to plan a rational attacker's possible attack actions. Hence, more work is needed to strategically plan the attacker's and the defender's possible actions in the network.
Since the HARM is more scalable and adaptable than the AGs and ATs, we utilize its functionality to achieve this goal. Specifically, we develop a deterministic planning strategy (named metrics-based planning) with the HARM to systematically plan attacks for automated adversary actions. Moreover, we propose a novel framework named HARMer to automate the modeling and execution of cyber-attacks and threat detection. We carry out experiments in a real network and on Amazon Web Services (AWS) to demonstrate and validate the framework. The proposed framework will provide a way to automatically perform security analysis and evaluation of a real system by performing red team and blue team operations.
The major goals of this paper are summarized as follows.
\begin{itemize}
\item Develop a requirement specification for automating cyber-attacks.
\item Propose a framework for automating and assessing cyber-attack activities.
\item Develop an automated attack planner using a Graphical Security Model (GSM).
\item Demonstrate the framework using a case study network and experiments on the AWS.
\end{itemize}
\textbf{Contribution highlight:}
It is difficult for network defenders to employ offensive testing techniques to evaluate a network's security posture because they need to frequently search for well-defined attack scenarios that may be open to attackers. This process is time-consuming, costly, and impractical to perform regularly. Moreover, effectively planning and executing attacks depends on the quality of the team members. In this paper, we attempt to answer the following questions: (1) What approach can be used to capture the attack scenario of a real attacker? (2) How can real attacks be automated? (3) How long will it take to perform the automated attacks?
To answer these questions, we propose a novel framework for automating the modeling of cyber-attacks. The framework will support the automatic assessment of network security by collecting attack information and then exploiting it, just as a real attacker would. By doing so, a defender can identify the network's weak spots and deploy the best form of available cyber defense. \\
Existing frameworks that use the AGs to identify overall potential attack paths suffer from computational complexity. As a result, it is challenging to represent the full range of cyber-attacks with the AGs due to the numerous possibilities and choices that are available to the attacker. In this paper, we incorporate a scalable security model (HARM) to reduce this complexity <|cite_start|> (Reference: Towards scalable security analysis using multi-layered security models: ) <|cite_end|>.
Moreover, we develop and automate three new metric-based attack planning strategies that automatically generate a more specific and realistic attack path to use (because neither the HARM nor AGs explicitly specify which attack path will be exploited at a given time). In addition, we model the networks with nodes and edges, in which the nodes have various attributes that model the node components, such as the operating system (OS), vulnerabilities, and open ports, in order to allow for multiple simulations in different scenarios. In Table \ref{tbl:contribution}, we highlight our contributions compared to similar approaches. We use the symbols \checkmark and \ding{55} to indicate, respectively, which contributions a paper makes and which it does not.
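As a toy illustration of metric-based planning over such a two-layer model (the concrete metrics below are examples only and are not the three strategies defined later in the paper), candidate attack paths in the upper layer can be enumerated and ranked by a score derived from the lower-layer vulnerability attributes:
\begin{verbatim}
import networkx as nx

def plan_attack(upper, lower, source, target, metric="shortest"):
    # Enumerate candidate attack paths in the upper (reachability) layer
    # and rank them with a simple metric; illustrative only.
    paths = list(nx.all_simple_paths(upper, source, target))
    if not paths:
        return None

    def host_risk(host):
        vulns = lower.get(host, [])
        return max((v["prob"] * v["impact"] for v in vulns), default=0.0)

    if metric == "shortest":          # fewest hops to the target
        return min(paths, key=len)
    # otherwise: path whose hosts expose the highest cumulative risk
    return max(paths, key=lambda p: sum(host_risk(h) for h in p))
\end{verbatim}
With the toy upper/lower structures sketched earlier, plan_attack(upper, lower, "attacker", "db") would simply return the single path through the web server; richer metrics lead to different, more targeted choices.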
The rest of this paper is organized as follows. Section~\ref{sec:related_work} gives summary of the related work. Section~\ref{sec:methodology} discusses methodology, requirement analysis, and the automation framework. Section~\ref{sec:planning_strategy} describes the proposed attack planning strategies. Section~\ref{sec:framework_demo} presents the illustration of the attack framework using a case study. In Section~\ref{sec:Experiments}, we present our experiments and results based on Amazon's AWS using two network models. Section~\ref{sec:discussion} discusses our results, limitations and future work. Lastly, Section~\ref{sec:conclusion} concludes the paper.
\begin{table*}[!ht]
\scriptsize
\centering
\caption{Contributions highlight}
\label{tbl:contribution}
\begin{tabular}{lccccccccc} \hline
& <|cite_start|> (Reference: Attack Graph Based Evaluation of Network Security: ) <|cite_end|> <|cite_start|> (Reference: An Attack Graph-Based Probabilistic Security Metric: ) <|cite_end|> <|cite_start|> (Reference: Dynamic security risk management using bayesian attack graphs: Security risk assessment and mitigation are two vital processes that need to be executed to maintain a productive IT infrastructure. On one hand, models such as attack graphs and attack trees have been proposed to assess the cause-consequence relationships between various network states, while on the other hand, different decision problems have been explored to identify the minimum-cost hardening measures. However, these risk models do not help reason about the causal dependencies between network states. Further, the optimization formulations ignore the issue of resource availability while analyzing a risk model. In this paper, we propose a risk management framework using Bayesian networks that enable a system administrator to quantify the chances of network compromise at various levels. We show how to use this information to develop a security mitigation and management plan. In contrast to other similar models, this risk model lends itself to dynamic analysis during the deployed phase of the network. A multiobjective optimization platform provides the administrator with all trade-off information required to make decisions in a resource constrained environment.) <|cite_end|> & <|cite_start|> (Reference: Knowledge-based Decision Making for Simulating Cyber Attack Behaviors: Knowledge-based Decision Making for Simulating Cyber Attack Behaviors Stephen Frank Moskal Supervising Professor: Dr. Shanchieh Jay Yang Computer networks are becoming more complex as the reliance on these network increases in this era of exponential technological growth. This makes the potential gains for criminal activity on these networks extremely serious and can not only devastate organizations or enterprises but also the general population. As complexity of the network increases so does the difficulty to protect the networks as more potential vulnerabilities are introduced. Despite best efforts, traditional defenses like Intrusion Detection Systems and penetration tests are rendered ineffective to even amateur cyber adversaries. Networks now need to be analyzed at all times to preemptively detect weaknesses which harbored a new research field called Cyber Threat Analytics. However, current techniques for cyber threat analytics typically perform static analysis on the network and system vulnerabilities but few address the most variable and most critical piece of the puzzle – the attacker themselves. This work focuses on defining a baseline framework for modeling a wide variety of cyber attack behaviors which can be used in conjunction with a cyber attack simulator to analyze the effects of individual or multiple attackers on a network. To model a cyber attacker’s behaviors with reasonable accuracy and flexibility, the model must be based on aspects of an attacker that are used in real scenarios. Real cyber attackers base their decisions on what they know and learn about the network, vulnerabilities, and targets. This attacker behavior model introduces the aspect of knowledge-based decision making to cyber attack behavior modeling with the goal of providing user configurable options. 
This behavior model employs Cyber Attack Kill Chain R ©along with an ensemble of the attacker) <|cite_end|> & <|cite_start|> (Reference: {Cyber-attack and Defense Simulation Framework: Various papers on cyberwarfare in virtual environments and cybersecurity in intelligent systems have been published. Work has focused on the integration of cyberwarfare communication effects into a live–virtual–constructive (LVC) environment in order to better represent a network centric battlespace subject to cyber-attack, at a large force level. In addition, virtual cyber ranges have been developed. A virtual cyber range is a portable modeling and simulation framework that provides a real-time, hardware-in-the-loop capability for simulation of cyber threats to the entire net-centric infrastructure. The framework enables interoperability with LVC simulations, providing training and assessment of human-in-the-loop performance. This work builds on previous work and shows the need for a framework to support the modeling and simulation of cyber-attack and defense for training and assessment. This work does focus on cybersecurity for wireless communications, but the focus is on large network-centric systems at force level. Furthermore, there are published descriptions of research focused on the use of multi-agent systems in vehicle-to-vehicle communication and autonomous driving systems, such as Google’s driverless car. While the vehicles are commercial by nature, there are many similarities to military autonomous vehicles (MAVs), in their system components and architecture as well as the cybersecurity threats that they face. While these works are related to the work presented in this paper, with their focus on synthetic environments and commercial autonomous systems, they are not closely related. The focus of this paper is on small point to point and point to multipoint wireless networks, comprised of only a few nodes, with the nodes representing MAVs and controllers. The US Army, Air Force, and Navy are continuing to increase their use of these systems, and are now focusing on cybersecurity. Due to the lagging focus on wireless networks and autonomous vehicles, there is not much published work that is related. Current unmanned systems were not built with cybersecurity considerations taken into account, and are thus vulnerable to cyber-attack. The objective of this paper is to present and describe the need for a cyber-attack and defense simulation framework to support the modeling and simulation for cybersecurity of autonomous vehicle systems used by US Armed Forces. These autonomous vehicle systems include unmanned aerial systems and unmanned ground systems. The paper describes a notional framework to support this type of modeling, as well as a detailed use case and example cyber-attack simulation system.) <|cite_end|> & <|cite_start|> (Reference: A machine learning framework for investigating data breaches based on semantic analysis of adversary's attack patterns in threat intelligence repositories: ) <|cite_end|> & <|cite_start|> (Reference: {Visual Analytics for Cyber Red Teaming: Our team is currently developing an Automated Cyber Red Teaming system that, when given a model-based capture of an organisation's network, uses automated planning techniques to generate and assess multi-stage attacks. Specific to this paper, we discuss our development of the visual analytic component of this system. 
Through various views that display network attacks paths at different levels of abstraction, our tool aims to enhance cyber situation awareness of human decision makers.) <|cite_end|> & <|cite_start|> (Reference: Attack Planning in the Real World: Assessing network security is a complex and difficult task. Attack graphs have been proposed as a tool to help network administrators understand the potential weaknesses of their network. However, a problem has not yet been addressed by previous work on this subject; namely, how to actually execute and validate the attack paths resulting from the analysis of the attack graph. In this paper we present a complete PDDL representation of an attack model, and an implementation that integrates a planner into a penetration testing tool. This allows to automatically generate attack paths for penetration testing scenarios, and to validate these attacks by executing the corresponding actions -including exploits- against the real target network. We present an algorithm for transforming the information present in the penetration testing tool to the planning domain, and show how the scalability issues of attack graphs can be solved using current planners. We include an analysis of the performance of our solution, showing how our model scales to medium-sized networks and the number of actions available in current penetration testing tools.) <|cite_end|> <|cite_start|> (Reference: An Algorithm to Find Optimal Attack Paths in Nondeterministic Scenarios: As penetration testing frameworks have evolved and have become more complex, the problem of controlling automatically the pentesting tool has become an important question. This can be naturally addressed as an attack planning problem. Previous approaches to this problem were based on modeling the actions and assets in the PDDL language, and using off-the-shelf AI tools to generate attack plans. These approaches however are limited. In particular, the planning is classical (the actions are deterministic) and thus not able to handle the uncertainty involved in this form of attack planning. We herein contribute a planning model that does capture the uncertainty about the results of the actions, which is modeled as a probability of success of each action. We present efficient planning algorithms, specifically designed for this problem, that achieve industrial-scale runtime performance (able to solve scenarios with several hundred hosts and exploits). These algorithms take into account the probability of success of the actions and their expected cost (for example in terms of execution time, or network traffic generated). We thus show that probabilistic attack planning can be solved efficiently for the scenarios that arise when assessing the security of large networks. Two "primitives" are presented, which are used as building blocks in a framework separating the overall problem into two levels of abstraction. We also present the experimental results obtained with our implementation, and conclude with some ideas for further work.) <|cite_end|> & <|cite_start|> (Reference: SCERM - A novel framework for automated management of cyber threat response activities: ) <|cite_end|>& <|cite_start|> (Reference: Harms: Hierarchical attack representation models for network security analysis: Attack models can be used to assess network security. Purely graph based attack representation models (e.g., attack graphs) have a state-space explosion problem. 
Purely tree-based models (e.g., attack trees) cannot capture the path information explicitly. Moreover, the complex relationship between the host and the vulnerability information in attack models create difficulty in adjusting to changes in the network, which is impractical for modern large and dynamic network systems. To deal with these issues, we propose hierarchical attack representation models (HARMs). The main idea is to use two-layer hierarchy to separate the network topology information (in the upper layer) from the vulnerability information of each host (in the lower layer). We compare the HARMs with existing attack models (including attack graph and attack tree) in model complexity in the phase of construction, evaluation and modification.) <|cite_end|> <|cite_start|> (Reference: Towards scalable security analysis using multi-layered security models: ) <|cite_end|>& This paper \\ \hline \hline
Automation framework & \ding{55} &\checkmark &\checkmark &\checkmark &\checkmark & \ding{55} &\checkmark & \ding{55} &\checkmark \\
Detailed attack planning & \ding{55} &\ding{55} &\ding{55} &\checkmark &\ding{55} &\checkmark & \ding{55} & \ding{55} &\checkmark \\
Scalable GSM &\ding{55} & \ding{55} & \ding{55} & \ding{55} & \ding{55} &\ding{55} & \ding{55} &\checkmark &\checkmark \\
Experiments &\checkmark &\checkmark &\checkmark &\checkmark & \checkmark &\ding{55} & \checkmark & \ding{55} &\checkmark \\ \hline
\end{tabular}
\end{table*}
Related Work
\label{sec:related_work}
We discuss the state-of-the art work on automating cyber-attacks and defenses.
\subsection{Security Model Automation for Red Team and Blue Team}
\label{sec_sub:relatedWOrk_model_attack_Def}
There are a lot of works that addressed the problem of assessing the security of network systems using different types of automation approaches. We discuss the related work in two aspects: security models, and attack \& defense framework.
\textbf{Security Models:}
One of the popular use of automation for red team activities is the use of AGs. The AG provides a way for the red team to generate possible sequences of attack steps to gain access to a target using network reachability information and a set of vulnerabilities. The work of Phillipsi \& Swiler <|cite_start|> (Reference: {A Graph-based System for Network Vulnerability Analysis: This paper presents a graph-based approach to network vulnerability analysis. The method is flexible, allowing analysis of attacks from both outside and inside the network. It can analyze risks to a specific network asset, or examine the universe of possible consequences following a successful attack. The graph-based tool can identify the set of attack paths that have a high probability of success (or a low effort cost) for the attacker. The system could be used to test the effectiveness of making configuration changes, implementing an intrusion detection system, etc. The analysis system requires as input a database of common attacks, broken into atomic steps, specific network configuration and topology information, and an attacker profile. The attack information is matched with the network configuration information and an attacker profile to create a superset attack graph. Nodes identify a stage of attack, for example the class of machines the attacker has accessed and the user privilege level he or she has compromised. The arcs in the attack graph represent attacks or stages of attacks. By assigning probabilities of success on the arcs or costs representing level-of-effort for the attacker, various graph algorithms such as shortest-path algorithms can identify the attack paths with the highest probability of success.) <|cite_end|> is one of the earlier work that developed a graph-based tool to assess the risks to a networked system by identifying the set of attack paths with a high probability of success or low attack costs for the attacker. This tool provided a way to test the effectiveness of defenses (such as intrusion detection systems, firewall rules changes, etc).
Sheyner \textit{et al.} <|cite_start|> (Reference: Automated Generation and Analysis of Attack Graphs: An integral part of modeling the global view of network security is constructing attack graphs. Manual attack graph construction is tedious, error-prone, and impractical for attack graphs larger than a hundred nodes. In this paper we present an automated technique for generating and analyzing attack graphs. We base our technique on symbolic model checking algorithms, letting us construct attack graphs automatically and efficiently. We also describe two analyses to help decide which attacks would be most cost-effective to guard against. We implemented our technique in a tool suite and tested it on a small network example, which includes models of a firewall and an intrusion detection system.) <|cite_end|> presented an automated approach to generating and analyzing AGs based on symbolic model checking algorithm. Besides, they performed minimization analysis on the AG to determine the minimal sets of atomic attacks that must be prevented in order to guarantee that the attacker cannot reach his goal. Kotenko and Stepashkin <|cite_start|> (Reference: Attack Graph Based Evaluation of Network Security: ) <|cite_end|> utilized the AGs to simulate and evaluate the attacker's actions (based on vulnerabilities). To improve security, Kotenko and Stepashkin checked the various properties of the AGs and then used various security metrics to determine ways to prevent possible attacks.
Wang \textit{et al.} <|cite_start|> (Reference: An Attack Graph-Based Probabilistic Security Metric: ) <|cite_end|> proposed an AG-based probabilistic metric to measure the likelihood of sophisticated attacks combining multiple vulnerabilities to reach the attacker target. Poolsappasit \textit{et al.} <|cite_start|> (Reference: Dynamic security risk management using bayesian attack graphs: Security risk assessment and mitigation are two vital processes that need to be executed to maintain a productive IT infrastructure. On one hand, models such as attack graphs and attack trees have been proposed to assess the cause-consequence relationships between various network states, while on the other hand, different decision problems have been explored to identify the minimum-cost hardening measures. However, these risk models do not help reason about the causal dependencies between network states. Further, the optimization formulations ignore the issue of resource availability while analyzing a risk model. In this paper, we propose a risk management framework using Bayesian networks that enable a system administrator to quantify the chances of network compromise at various levels. We show how to use this information to develop a security mitigation and management plan. In contrast to other similar models, this risk model lends itself to dynamic analysis during the deployed phase of the network. A multiobjective optimization platform provides the administrator with all trade-off information required to make decisions in a resource constrained environment.) <|cite_end|> proposed Bayesian AGs to quantify the likelihood of a network being compromised at different levels. Based on the level's information, Poolsappasit \textit{et al.} developed a security mitigation and management plan for the network administrator. \\
\textit{Summary: }The aforementioned AG approaches focused on generating a set of attack paths to the attack goal with no indication of a specific attack path that at adversary may use per time. Hence, it is difficult to use the AGs to automate the real-world interaction between the attacker and the defender since no specific plan is shown.\\
\textbf{Framework for cyber-attacks:}
An attack framework will provide a structure and flow to combine the analysis and evaluations of cyber-threats . In this section, we present the state-of-the-art automation framework.
Moskal <|cite_start|> (Reference: Knowledge-based Decision Making for Simulating Cyber Attack Behaviors: Knowledge-based Decision Making for Simulating Cyber Attack Behaviors Stephen Frank Moskal Supervising Professor: Dr. Shanchieh Jay Yang Computer networks are becoming more complex as the reliance on these network increases in this era of exponential technological growth. This makes the potential gains for criminal activity on these networks extremely serious and can not only devastate organizations or enterprises but also the general population. As complexity of the network increases so does the difficulty to protect the networks as more potential vulnerabilities are introduced. Despite best efforts, traditional defenses like Intrusion Detection Systems and penetration tests are rendered ineffective to even amateur cyber adversaries. Networks now need to be analyzed at all times to preemptively detect weaknesses which harbored a new research field called Cyber Threat Analytics. However, current techniques for cyber threat analytics typically perform static analysis on the network and system vulnerabilities but few address the most variable and most critical piece of the puzzle – the attacker themselves. This work focuses on defining a baseline framework for modeling a wide variety of cyber attack behaviors which can be used in conjunction with a cyber attack simulator to analyze the effects of individual or multiple attackers on a network. To model a cyber attacker’s behaviors with reasonable accuracy and flexibility, the model must be based on aspects of an attacker that are used in real scenarios. Real cyber attackers base their decisions on what they know and learn about the network, vulnerabilities, and targets. This attacker behavior model introduces the aspect of knowledge-based decision making to cyber attack behavior modeling with the goal of providing user configurable options. This behavior model employs Cyber Attack Kill Chain R ©along with an ensemble of the attacker) <|cite_end|> presented a framework for modeling cyber-attack behaviors for use with existing attack simulators in order to analyze the effects of single or multiple attackers on a network. This framework utilizes Cyber Kill Chain behavior to model an attacker's decisions while taking into account what the attacker knows, how the attacker learns about the network, the vulnerabilities, and targets.
Similar to our work is the extension provided by Moskal \textit{et al.} <|cite_start|> (Reference: Cyber threat assessment via attack scenario simulation using an integrated adversary and network modeling approach: Existing research on cyber threat assessment focuses on analyzing the network vulnerabilities and producing possible attack graphs. Cyber attacks in real-world enterprise networks, however, vary significantly due to not only network and system configurations, but also the attacker’s strategies. This work proposes a cyber-based attacker behavior model (ABM) in conjunction with the Cyber Attack Scenario and Network Defense Simulator to model the interaction between the network and the attackers. The ABM leverages a knowledge-based design and factors in the capability, opportunity, intent, preference, and Cyber Attack Kill Chain integration to model various types of attackers. By varying the types of attackers and the network configurations, and simulating their interactions, we present a method to measure the overall network security against cyber attackers under different scenarios. Simulation results based on four attacker types on two network configurations are shown to demonstrate how different attacker behaviors may lead to different ways to penetrate a network, and how a single misconfiguration may impact network security.) <|cite_end|>, which proposed the red and blue team's simulation framework to show the interplay between an attacker and defender. The framework was defined based on the network, the attackers, and the intentions, the dependencies between the attacker and the network including capabilities and preferences. Furthermore, they showed an assessment approach of how different attack scenarios may occur under different attacker's intent, opportunity, capability, and preference against a network configuration. However, our work is richer in terms of attack planning strategies.
Matherly <|cite_start|> (Reference: The Red Teaming Essential: ) <|cite_end|> provided a theoretical framework to investigate and identify the best strategy for combining red teams and social psychology techniques to improve adversary prediction. Bergin <|cite_start|> (Reference: {Cyber-attack and Defense Simulation Framework: Various papers on cyberwarfare in virtual environments and cybersecurity in intelligent systems have been published. Work has focused on the integration of cyberwarfare communication effects into a live–virtual–constructive (LVC) environment in order to better represent a network centric battlespace subject to cyber-attack, at a large force level. In addition, virtual cyber ranges have been developed. A virtual cyber range is a portable modeling and simulation framework that provides a real-time, hardware-in-the-loop capability for simulation of cyber threats to the entire net-centric infrastructure. The framework enables interoperability with LVC simulations, providing training and assessment of human-in-the-loop performance. This work builds on previous work and shows the need for a framework to support the modeling and simulation of cyber-attack and defense for training and assessment. This work does focus on cybersecurity for wireless communications, but the focus is on large network-centric systems at force level. Furthermore, there are published descriptions of research focused on the use of multi-agent systems in vehicle-to-vehicle communication and autonomous driving systems, such as Google’s driverless car. While the vehicles are commercial by nature, there are many similarities to military autonomous vehicles (MAVs), in their system components and architecture as well as the cybersecurity threats that they face. While these works are related to the work presented in this paper, with their focus on synthetic environments and commercial autonomous systems, they are not closely related. The focus of this paper is on small point to point and point to multipoint wireless networks, comprised of only a few nodes, with the nodes representing MAVs and controllers. The US Army, Air Force, and Navy are continuing to increase their use of these systems, and are now focusing on cybersecurity. Due to the lagging focus on wireless networks and autonomous vehicles, there is not much published work that is related. Current unmanned systems were not built with cybersecurity considerations taken into account, and are thus vulnerable to cyber-attack. The objective of this paper is to present and describe the need for a cyber-attack and defense simulation framework to support the modeling and simulation for cybersecurity of autonomous vehicle systems used by US Armed Forces. These autonomous vehicle systems include unmanned aerial systems and unmanned ground systems. The paper describes a notional framework to support this type of modeling, as well as a detailed use case and example cyber-attack simulation system.) <|cite_end|> presented a cyber-attack and defense simulation framework to support the modeling and simulation of cyber-attack and defense for training and assessment. The work focused on modeling and simulation for cybersecurity of autonomous vehicle systems (wireless communications) used by US Armed Forces.
Applebaum \textit{et al.} <|cite_start|> (Reference: Intelligent, automated red team emulation: Red teams play a critical part in assessing the security of a network by actively probing it for weakness and vulnerabilities. Unlike penetration testing - which is typically focused on exploiting vulnerabilities - red teams assess the entire state of a network by emulating real adversaries, including their techniques, tactics, procedures, and goals. Unfortunately, deploying red teams is prohibitive: cost, repeatability, and expertise all make it difficult to consistently employ red team tests. We seek to solve this problem by creating a framework for automated red team emulation, focused on what the red team does post-compromise - i.e., after the perimeter has been breached. Here, our program acts as an automated and intelligent red team, actively moving through the target network to test for weaknesses and train defenders. At its core, our framework uses an automated planner designed to accurately reason about future plans in the face of the vast amount of uncertainty in red teaming scenarios. Our solution is custom-developed, built on a logical encoding of the cyber environment and adversary profiles, using techniques from classical planning, Markov decision processes, and Monte Carlo simulations. In this paper, we report on the development of our framework, focusing on our planning system. We have successfully validated our planner against other techniques via a custom simulation. Our tool itself has successfully been deployed to identify vulnerabilities and is currently used to train defending blue teams.) <|cite_end|> developed a framework that used MITRE framework-Adversarial Tactics, Techniques, and Common Knowledge (ATT\&CK). Their framework specifically takes into account the post-compromise effect that an adversary can take in a network.
Choo \textit{et al.} <|cite_start|> (Reference: {Automated Red Teaming: A Proposed Framework for Military Application: In this paper, we describe Automated Red Teaming (ART), a concept that uses Evolutionary Algorithm (EA), Parallel Computing and Simulation to complement the manual Red Teaming effort to uncover system vulnerabilities or to find exploitable gaps in military operational concepts. The overall goal is to reduce surprises, improve and ensure the robustness of the Blue ops concepts. The design of key components and techniques that are required to develop an ART framework are described and discussed. An experiment with a military scenario in Urban Operations (UO) was conducted and the results analyzed to demonstrate the capability of the ART framework. Results showed that Red Force survivability can be improved by 27% just by modifying behavioral parameters alone. These findings could be used by Blue Force to refine their tactics and strategy thereby ensuring robustness of plans and higher mission success.) <|cite_end|> leveraged parallel processing,
evolutionary algorithms and agent-based simulations to develop an automated RT (ART) framework for a military operation. The framework consists of (1) ART parameter setting interface which will allow the initial selection of the parameters that are to be varied, (2) ART controller - controls and coordinates the whole process of the framework, (3) the simulation model-dependent modules add a layer of data flow to and from the framework and simulation model of the parameters to be executed, (4) the EA module prepared the parameters for the simulations and analysis using any of the EAs, (5) the condor provides a job queuing, scheduling policy and resource management for distributed computing and (6) the output module is used to provide feedback, update and run results. In Chua \textit{et al.} <|cite_start|> (Reference: {Automated Red Teaming: An Objective-based Data Farming Approach for Red Teaming: In this paper, we describe an objective-based data farming approach for red teaming called automated red teaming (ART). The main idea is to develop an ART framework using evolutionary algorithms (EAs), parallel computing and simulation, and apply it to uncover exploitable gaps in military operational concepts, complementing the manual red teaming (MRT) effort. The capability of the ART framework was evaluated vis-a-vis MRT using two maritime security scenarios addressed at the International Data Farming Workshops (IDFWs) 14 and 15. The evaluation showed that, in general, results from ART were better than those obtained from MRT, some of which were non-intuitive and surprising solutions.) <|cite_end|>, the capability of the ART framework in <|cite_start|> (Reference: {Automated Red Teaming: A Proposed Framework for Military Application: In this paper, we describe Automated Red Teaming (ART), a concept that uses Evolutionary Algorithm (EA), Parallel Computing and Simulation to complement the manual Red Teaming effort to uncover system vulnerabilities or to find exploitable gaps in military operational concepts. The overall goal is to reduce surprises, improve and ensure the robustness of the Blue ops concepts. The design of key components and techniques that are required to develop an ART framework are described and discussed. An experiment with a military scenario in Urban Operations (UO) was conducted and the results analyzed to demonstrate the capability of the ART framework. Results showed that Red Force survivability can be improved by 27% just by modifying behavioral parameters alone. These findings could be used by Blue Force to refine their tactics and strategy thereby ensuring robustness of plans and higher mission success.) <|cite_end|> was evaluated against the manual RT using two maritime security scenarios.
Yuen \textit{et al.} <|cite_start|> (Reference: {Visual Analytics for Cyber Red Teaming: Our team is currently developing an Automated Cyber Red Teaming system that, when given a model-based capture of an organisation's network, uses automated planning techniques to generate and assess multi-stage attacks. Specific to this paper, we discuss our development of the visual analytic component of this system. Through various views that display network attacks paths at different levels of abstraction, our tool aims to enhance cyber situation awareness of human decision makers.) <|cite_end|> developed an ART framework that uses automated planning and knowledge representation techniques to conduct the RT exercise. The high-level view of the framework consists of the world model (i.e., the overall system that is being red-teamed), AI planner, AG generator, threat analysis, course of action planning, change deployment, \textit{etc}.
Noor \textit{et al.} <|cite_start|> (Reference: A machine learning framework for investigating data breaches based on semantic analysis of adversary's attack patterns in threat intelligence repositories: ) <|cite_end|> presented a machine learning framework for investigating data breaches based on common patterns from threat repositories. The framework reasons on security incidence by mapping low-level threat artifacts to high-level adversary tactics, techniques, and procedures in a way that machines can identify these connections with certain probabilities. In <|cite_start|> (Reference: A supervised machine learning based approach for automatically extracting high-level threat intelligence from unstructured sources: The last few years have seen a radical shift in the cyber defense paradigm from reactive to proactive, and this change is marked by the steadily increasing trend of Cyber Threat Intelligence (CTI) sharing. Currently, there are numerous Open Source Intelligence (OSINT) sources providing periodically updated threat feeds that are fed into various analytical solutions. At this point, there is an excessive amount of data being produced from such sources, both structured (STIX, OpenIOC, etc.) as well as unstructured (blacklists, etc.). However, more often than not, the level of detail required for making informed security decisions is missing from threat feeds, since most indicators are atomic in nature, like IPs and hashes, which are usually rather volatile. These feeds distinctly lack strategic threat information, like attack patterns and techniques that truly represent the behavior of an attacker or an exploit. Moreover, there is a lot of duplication in threat information and no single place where one could explore the entirety of a threat, hence requiring hundreds of man hours for sifting through numerous sources — trying to discern signal from noise — to find all the credible information on a threat. We have made use of natural language processing to extract threat feeds from unstructured cyber threat information sources with approximately 70\% precision, providing comprehensive threat reports in standards like STIX, which is a widely accepted industry standard that represents CTI. The automation of an otherwise tedious manual task would ensure the timely gathering and sharing of relevant CTI that would give organizations the edge to be able to proactively defend against known as well as unknown threats.) <|cite_end|>, the authors presented a machine learning-based approach to automatically extract cyber threats information such as attack patterns and techniques that may represent attacker behaviors or attack exploits. \\
\textit{Summary:} These approaches are different from our framework as they have focused on mapping information from existing repositories or threat artifacts based on the probability of attacks while our proposed automation framework is based on real-time attack information and execution on a network. Moreover, the frameworks lack sufficient attack planning methods or they are based on a theoretical framework.
\subsection{Attack Planning}
\label{sec_sub:relatedWOrk_planning}
Identifying a workable attack path can be time-consuming for the RT, and so automated planning techniques are being considered as a feasible method of discovering possible attack paths for automating the RT agent.
There are a few works on the application of planning techniques for reasoning in emulation/simulation of attacker behavior. Boddy \textit{et al.} <|cite_start|> (Reference: Course of action generation for cyber security using classical planning: We report on the results of applying classical planning techniques to the problem of analyzing computer network vulnerabilities. Specifically, we are concerned with the generation of Adversary Courses of Action, which are extended sequences of exploits leading from some initial state to an attacker's goal. In this application, we have demonstrated the generation of attack plans for a simple but realistic web-based document control system, with excellent performance compared to the prevailing state of the art in this area.
In addition to the new capabilities gained in the area of vulnerability analysis, this implementation provided some insights into performance and modeling issues for classical planning systems, both specifically with regard to METRIC-FF and other forward heuristic planners, and more generally for classical planning. To facilitate additional work in this area, the domain model on which this work was done will be made freely available. See the paper's Conclusion for details.) <|cite_end|> presented an approach for the generation of adversary courses of action from the initial state to the target machine using a classical planning technique. This planning approach was used to predict the attacker's actions.
Obes \textit{et al.} <|cite_start|> (Reference: Attack Planning in the Real World: Assessing network security is a complex and difficult task. Attack graphs have been proposed as a tool to help network administrators understand the potential weaknesses of their network. However, a problem has not yet been addressed by previous work on this subject; namely, how to actually execute and validate the attack paths resulting from the analysis of the attack graph. In this paper we present a complete PDDL representation of an attack model, and an implementation that integrates a planner into a penetration testing tool. This allows to automatically generate attack paths for penetration testing scenarios, and to validate these attacks by executing the corresponding actions -including exploits- against the real target network. We present an algorithm for transforming the information present in the penetration testing tool to the planning domain, and show how the scalability issues of attack graphs can be solved using current planners. We include an analysis of the performance of our solution, showing how our model scales to medium-sized networks and the number of actions available in current penetration testing tools.) <|cite_end|> used Planning Domain Definition Language (PDDL) description of network hosts, vulnerabilities, and exploit to generate attack paths which were integrated into a penetration testing (pentest) tool. Elsbroek \textit{et al.} also used the PDDL to generate attack paths for a pentest tool.
Sarraute \textit{et al.} <|cite_start|> (Reference: An Algorithm to Find Optimal Attack Paths in Nondeterministic Scenarios: As penetration testing frameworks have evolved and have become more complex, the problem of controlling automatically the pentesting tool has become an important question. This can be naturally addressed as an attack planning problem. Previous approaches to this problem were based on modeling the actions and assets in the PDDL language, and using off-the-shelf AI tools to generate attack plans. These approaches however are limited. In particular, the planning is classical (the actions are deterministic) and thus not able to handle the uncertainty involved in this form of attack planning. We herein contribute a planning model that does capture the uncertainty about the results of the actions, which is modeled as a probability of success of each action. We present efficient planning algorithms, specifically designed for this problem, that achieve industrial-scale runtime performance (able to solve scenarios with several hundred hosts and exploits). These algorithms take into account the probability of success of the actions and their expected cost (for example in terms of execution time, or network traffic generated). We thus show that probabilistic attack planning can be solved efficiently for the scenarios that arise when assessing the security of large networks. Two "primitives" are presented, which are used as building blocks in a framework separating the overall problem into two levels of abstraction. We also present the experimental results obtained with our implementation, and conclude with some ideas for further work.) <|cite_end|> addressed the problem of attack planning by taking into account uncertainty about the results of the attacker's actions, then modeling it as the probability of success for each action. In another work, Sarraute \textit{et al.} <|cite_start|> (Reference: Penetration testing== pomdp solving?: Penetration Testing is a methodology for assessing network security, by generating and executing possible attacks. Doing so automatically allows for regular and systematic testing without a prohibitive amount of human labor. A key question then is how to generate the attacks. This is naturally formulated as a planning problem. Previous work (Lucangeli et al. 2010) used classical planning and hence ignores all the incomplete knowledge that characterizes hacking. More recent work (Sarraute et al. 2011) makes strong independence assumptions for the sake of scaling, and lacks a clear formal concept of what the attack planning problem actually is. Herein, we model that problem in terms of partially observable Markov decision processes (POMDP). This grounds penetration testing in a well-researched formalism, highlighting important aspects of this problem's nature. POMDPs allow to model information gathering as an integral part of the problem, thus providing for the first time a means to intelligently mix scanning actions with actual exploits.) <|cite_end|> modeled the attack planning problem in terms of Partially Observable Markov Decision Processes (POMDP) for a pentest. Applebaum \textit{et al.} <|cite_start|> (Reference: Intelligent, automated red team emulation: Red teams play a critical part in assessing the security of a network by actively probing it for weakness and vulnerabilities. 
Unlike penetration testing - which is typically focused on exploiting vulnerabilities - red teams assess the entire state of a network by emulating real adversaries, including their techniques, tactics, procedures, and goals. Unfortunately, deploying red teams is prohibitive: cost, repeatability, and expertise all make it difficult to consistently employ red team tests. We seek to solve this problem by creating a framework for automated red team emulation, focused on what the red team does post-compromise - i.e., after the perimeter has been breached. Here, our program acts as an automated and intelligent red team, actively moving through the target network to test for weaknesses and train defenders. At its core, our framework uses an automated planner designed to accurately reason about future plans in the face of the vast amount of uncertainty in red teaming scenarios. Our solution is custom-developed, built on a logical encoding of the cyber environment and adversary profiles, using techniques from classical planning, Markov decision processes, and Monte Carlo simulations. In this paper, we report on the development of our framework, focusing on our planning system. We have successfully validated our planner against other techniques via a custom simulation. Our tool itself has successfully been deployed to identify vulnerabilities and is currently used to train defending blue teams.) <|cite_end|> and Miller \textit{et al.} <|cite_start|> (Reference: Automated Adversary Emulation: A Case for Planning and Acting with Unknowns: ,) <|cite_end|> used classical planning, Markov Decision Processes, and Monte Carlo simulations to plan attacks for an automated red teaming system (named Caldera).
Ghost \textit{et al.} <|cite_start|> (Reference: An intelligent technique for generating minimal attack graph: Attack graph is a tool to analyze multi-stage, multi-host attack scenarios in a network. It is a complete graph where each attack scenario is depicted by an attack path which is essentially a series of exploits. Each exploit in the series satisfies the pre-conditions for subsequent exploits and makes a casual relationship among them. One of the intrinsic problem with the generation of such a full attack graph is its scalability. In this work, an approach based on planner has been proposed for time-efficient scalable representation of the attack graphs. A planner is a special purpose search algorithm from artificial intelligence domain, used for finding out solutions within a large state space without suffering state space explosion. A case study has also been presented and the proposed methodology is found to be efficient than some of the earlier reported works.) <|cite_end|> proposed an approach based on a search algorithm for the AG that automatically generates attack paths (i.e., using a planner as a low-level module). Durkota \textit{et al.} <|cite_start|> (Reference: Hardening networks against strategic attackers using attack graph games: ) <|cite_end|> used AGs to determine attacker's next actions. The authors compute the attacker's set of possible actions based on AG reduction.
Randhawa \textit{et al. } <|cite_start|> (Reference: {Mission-Centric Automated Cyber Red Teaming: Cyberspace is ubiquitous and is becoming increasingly critical to many societal, commercial, military, and national functions as it emerges as an operational space in its own right. Within this context, decision makers must achieve mission continuity when operating in cyberspace. One aspect of any comprehensive security program is the use of penetration testing; the use of scanning, enumeration and offensive techniques not unlike those used by a potential adversary. Effective penetration testing provides security insight into the network as a system in its entirety. Often though, this systemic view is lost in reporting outcomes, instead becoming a list of vulnerable or exploitable systems that are individually evaluated for remediation priority. This paper introduces Trogdor; a mission-centric automated cyber red-teaming system. Trogdor undertakes model based Automated Cyber Red Teaming (ACRT) and critical node analysis to visually present the impact of vulnerable resources to cyber dependent missions. Specifically, this work discusses the purpose of Trogdor, outlines its architecture, design choices and the technologies it employs. This paper describes an application of Trogdor to an enterprise network scenario; specifically, how Trogdor provides an understanding of potential mission impacts arising from cyber vulnerabilities and mission or business-centric decision support in selecting possible strategies to mitigate those impacts.) <|cite_end|> presented an automated planning and cyber red-teaming system called Trogdor. Randhawa \textit{et al. } described Trogdor as a mission-centric red-teaming and defensive decision support system that can generate and visualize potential attack paths for known vulnerabilities for a networked system. The Trogdor used domain ontologies to describe the target environment; the network information and inter-dependencies between them, and the known software or hardware vulnerabilities.
Ghanem \& Chen <|cite_start|> (Reference: Reinforcement learning for efficient network penetration testing: Penetration testing (also known as pentesting or PT) is a common practice for actively assessing the defenses of a computer network by planning and executing all possible attacks to discover and exploit existing vulnerabilities. Current penetration testing methods are increasingly becoming non-standard, composite and resource-consuming despite the use of evolving tools. In this paper, we propose and evaluate an AI-based pentesting system which makes use of machine learning techniques, namely reinforcement learning (RL) to learn and reproduce average and complex pentesting activities. The proposed system is named Intelligent Automated Penetration Testing System (IAPTS) consisting of a module that integrates with industrial PT frameworks to enable them to capture information, learn from experience, and reproduce tests in future similar testing cases. IAPTS aims to save human resources while producing much-enhanced results in terms of time consumption, reliability and frequency of testing. IAPTS takes the approach of modeling PT environments and tasks as a partially observed Markov decision process (POMDP) problem which is solved by POMDP-solver. Although the scope of this paper is limited to network infrastructures PT planning and not the entire practice, the obtained results support the hypothesis that RL can enhance PT beyond the capabilities of any human PT expert in terms of time consumed, covered attacking vectors, accuracy and reliability of the outputs. In addition, this work tackles the complex problem of expertise capturing and re-use by allowing the IAPTS learning module to store and re-use PT policies in the same way that a human PT expert would learn but in a more efficient way.) <|cite_end|> proposed a reinforcement learning (RL) technique, where the system (named IAPTS) is modeled as a POMDP, and tested using an external POMDP-solver with different algorithms. According to Ghenem \& Chen, the proposed system can act as a module and can be integrated with most of the industrial pentesting frameworks to improve efficiency and accuracy. Similarly, Zennaro and Erdodi <|cite_start|> (Reference: Modeling penetration testing with reinforcement learning using capture-the-flag challenges and tabular Q-learning: Penetration testing is a security exercise aimed at assessing the security of a system by simulating attacks against it. So far, penetration testing has been carried out mainly by trained human attackers and its success critically depended on the available expertise. Automating this practice constitutes a non-trivial problem, as the range of actions that a human expert may attempts against a system and the range of knowledge she relies on to take her decisions are hard to capture. In this paper, we focus our attention on simplified penetration testing problems expressed in the form of capture the flag hacking challenges, and we apply reinforcement learning algorithms to try to solve them. In modelling these capture the flag competitions as reinforcement learning problems we highlight the specific challenges that characterize penetration testing. We observe these challenges experimentally across a set of varied simulations, and we study how different reinforcement learning techniques may help us addressing these challenges. 
In this way we show the feasibility of tackling penetration testing using reinforcement learning, and we highlight the challenges that must be taken into consideration, and possible directions to solve them.) <|cite_end|> presented a penetration testing approach using different RL techniques in a simulation. The focus of their work is to understand the feasibility of using RL techniques for RT. In our work, we focused on automatic attack execution on a real network.
Technologies such as Parallel Computing and Evolutionary Algorithms (EAs) are used to plan the red teaming exercise as well, where the Parallel Computing is leveraged to perform millions of simulations runs in an automated way, while EAs is used to optimize the required fitness value that can serve as the objective function. Specifically, the evolutionary algorithm is used to plan and decide defense options within the least amount of time. For example, Choo \textit{et al.} <|cite_start|> (Reference: {Automated Red Teaming: A Proposed Framework for Military Application: In this paper, we describe Automated Red Teaming (ART), a concept that uses Evolutionary Algorithm (EA), Parallel Computing and Simulation to complement the manual Red Teaming effort to uncover system vulnerabilities or to find exploitable gaps in military operational concepts. The overall goal is to reduce surprises, improve and ensure the robustness of the Blue ops concepts. The design of key components and techniques that are required to develop an ART framework are described and discussed. An experiment with a military scenario in Urban Operations (UO) was conducted and the results analyzed to demonstrate the capability of the ART framework. Results showed that Red Force survivability can be improved by 27% just by modifying behavioral parameters alone. These findings could be used by Blue Force to refine their tactics and strategy thereby ensuring robustness of plans and higher mission success.) <|cite_end|> used the evolution algorithms as the search algorithm to search for red parameters that result in the ``defeat” of blue then fix the parameter.\\
\textit{Summary:} Several complex attack planning strategies have been proposed, however, only a few have been used for automating the RT agent, while others have been used in isolation (without a defined framework) and not in a real network environment. <|paper_end|> | [
"<|reference_start|> 22nd Annual Computer Security Applications Conference (ACSAC 2006), 11-15 December 2006, Miami Beach, Florida, USA: <|reference_end|>",
"<|reference_start|> Towards scalable security analysis using multi-layered security models: <|reference_end|>",
"<|reference_start|> Knowledge-based Decision Making for Simulating Cyber Attack Behaviors: Knowledge-based Decision Making for Simulating Cyber Attack Behaviors Stephen Frank Moskal Supervising Professor: Dr. Shanchieh Jay Yang Computer networks are becoming more complex as the reliance on these network increases in this era of exponential technological growth. This makes the potential gains for criminal activity on these networks extremely serious and can not only devastate organizations or enterprises but also the general population. As complexity of the network increases so does the difficulty to protect the networks as more potential vulnerabilities are introduced. Despite best efforts, traditional defenses like Intrusion Detection Systems and penetration tests are rendered ineffective to even amateur cyber adversaries. Networks now need to be analyzed at all times to preemptively detect weaknesses which harbored a new research field called Cyber Threat Analytics. However, current techniques for cyber threat analytics typically perform static analysis on the network and system vulnerabilities but few address the most variable and most critical piece of the puzzle – the attacker themselves. This work focuses on defining a baseline framework for modeling a wide variety of cyber attack behaviors which can be used in conjunction with a cyber attack simulator to analyze the effects of individual or multiple attackers on a network. To model a cyber attacker’s behaviors with reasonable accuracy and flexibility, the model must be based on aspects of an attacker that are used in real scenarios. Real cyber attackers base their decisions on what they know and learn about the network, vulnerabilities, and targets. This attacker behavior model introduces the aspect of knowledge-based decision making to cyber attack behavior modeling with the goal of providing user configurable options. This behavior model employs Cyber Attack Kill Chain R ©along with an ensemble of the attacker <|reference_end|>",
"<|reference_start|> Penetration testing== pomdp solving?: Penetration Testing is a methodology for assessing network security, by generating and executing possible attacks. Doing so automatically allows for regular and systematic testing without a prohibitive amount of human labor. A key question then is how to generate the attacks. This is naturally formulated as a planning problem. Previous work (Lucangeli et al. 2010) used classical planning and hence ignores all the incomplete knowledge that characterizes hacking. More recent work (Sarraute et al. 2011) makes strong independence assumptions for the sake of scaling, and lacks a clear formal concept of what the attack planning problem actually is. Herein, we model that problem in terms of partially observable Markov decision processes (POMDP). This grounds penetration testing in a well-researched formalism, highlighting important aspects of this problem's nature. POMDPs allow to model information gathering as an integral part of the problem, thus providing for the first time a means to intelligently mix scanning actions with actual exploits. <|reference_end|>"
] | [
3,
7,
11,
39
] | {"<|cite_2|>": "ss-2202274", "<|cite_3|>": "ss-1099539", "<|cite_4|>": "ss-761622", "<|cite_5|>": "ss-1978712", "<|cite_6|>": "ss-1409114", "<|multi_cite_7_1|>": "ss-1624084", "<|multi_cite_7_2|>": "ss-1544213", "<|cite_8|>": "ss-1544213", "<|multi_cite_9_1|>": "ss-2202275", "<|multi_cite_9_2|>": "ss-840255", "<|multi_cite_9_3|>": "ss-1123437", "<|cite_10|>": "ss-1805403", "<|cite_11|>": "ss-2202276", "<|cite_12|>": "ss-1667908", "<|cite_13|>": "ss-1087038", "<|multi_cite_14_1|>": "arxiv-47071", "<|multi_cite_14_3|>": "arxiv-47070", "<|cite_15|>": "ss-2202277", "<|multi_cite_16_1|>": "ss-1624084", "<|multi_cite_16_2|>": "ss-1544213", "<|cite_17|>": "ss-1678167", "<|cite_18|>": "ss-898042", "<|cite_19|>": "ss-2202275", "<|cite_20|>": "ss-840255", "<|cite_21|>": "ss-1123437", "<|cite_22|>": "ss-1805403", "<|cite_23|>": "ss-826368", "<|cite_24|>": "ss-2202278", "<|cite_25|>": "ss-2202276", "<|cite_26|>": "ss-2147920", "<|cite_27|>": "ss-2202279", "<|cite_28|>": "ss-2202280", "<|cite_29|>": "ss-2202279", "<|cite_30|>": "ss-1087038", "<|cite_31|>": "ss-1667908", "<|cite_32|>": "ss-1168962", "<|cite_33|>": "ss-1691820", "<|cite_34|>": "arxiv-47071", "<|cite_36|>": "arxiv-47070", "<|cite_37|>": "ss-1239084", "<|cite_38|>": "ss-2147920", "<|cite_39|>": "ss-1809074", "<|cite_40|>": "ss-1701366", "<|cite_41|>": "ss-1326498", "<|cite_42|>": "ss-2202281", "<|cite_43|>": "ss-1192906", "<|cite_44|>": "ss-1525583", "<|cite_45|>": "ss-2202279"} |
1911.10765-1 | looked at the \emph{weighted} matroid intersection problem, and gave
polynomial time algorithms.
For certain special matroids, faster algorithms are known. Indeed,
when the matroid is given explicitly, one can talk of pure running
time instead of oracle queries. For instance, for exact maximum cardinality
bipartite matching, the fastest known algorithms are the $O(m\sqrt{n})$-time
algorithm due to <|cite_start|> (Reference: A n^5/2 Algorithm for Maximum Matchings in Bipartite Graphs: The present paper shows how to construct a maximum matching in a bipartite graph with n vertices and m edges in a number of computation steps proportional to $(m + n)\sqrt n $.) <|cite_end|>and $\tilde{O}(m^{10/7})$-time
algorithm due to M\k{a}dry <|cite_start|> (Reference: Navigating Central Path with Electrical Flows: from Flows to Matchings, and Back: We present an $\tilde{O}(m^{10/7})=\tilde{O}(m^{1.43})$-time algorithm for the maximum s-t flow and the minimum s-t cut problems in directed graphs with unit capacities. This is the first improvement over the sparse-graph case of the long-standing $O(m \min(\sqrt{m},n^{2/3}))$ time bound due to Even and Tarjan [EvenT75]. By well-known reductions, this also establishes an $\tilde{O}(m^{10/7})$-time algorithm for the maximum-cardinality bipartite matching problem. That, in turn, gives an improvement over the celebrated celebrated $O(m \sqrt{n})$ time bound of Hopcroft and Karp [HK73] whenever the input graph is sufficiently sparse.) <|cite_end|>. Here $m$ is the number of
edges in the graph (and so, the number of elements in the matroid),
while $n$ is the number of vertices, which is the rank of the matroid.
It is instructive to compare what our results give: we obtain $\tilde{O}(m\sqrt{n}\cdot\Trank)$-
and $\tilde{O}(mn\cdot\Tind)$-time algorithms. In dense
graphs, the best algorithm is an $O(n^{\omega})$-time algorithm
by <|cite_start|> (Reference: Maximum matchings via Gaussian elimination: We present randomized algorithms for finding maximum matchings in general and bipartite graphs. Both algorithms have running time O(n/sup w/), where w is the exponent of the best known matrix multiplication algorithm. Since w < 2.38, these algorithms break through the O(n/sup 2.5/) barrier for the matching problem. They both have a very simple implementation in time O(n/sup 3/) and the only non-trivial element of the O(n/sup w/) bipartite matching algorithm is the fast matrix multiplication algorithm. Our results resolve a long-standing open question of whether Lovasz's randomized technique of testing graphs for perfect matching in time O(n/sup w/) can be extended to an algorithm that actually constructs a perfect matching.) <|cite_end|> <|cite_start|> (Reference: Algebraic algorithms for matching and matroid problems: We present new algebraic approaches for two well-known combinatorial problems: nonbipartite matching and matroid intersection. Our work yields new randomized algorithms that exceed or match the efficiency of existing algorithms. For nonbipartite matching, we obtain a simple, purely algebraic algorithm with running time $O(n^\omega)$ where $n$ is the number of vertices and $\omega$ is the matrix multiplication exponent. This resolves the central open problem of Mucha and Sankowski (2004). For matroid intersection, our algorithm has running time $O(nr^{\omega-1})$ for matroids with $n$ elements and rank $r$ that satisfy some natural conditions.) <|cite_end|>, where $\omega$ is the exponent of matrix
multiplication. For linear matroids, the matroid intersection problem
can be solved in essentially $O(nr^{\omega-1})$-time <|cite_start|> (Reference: Algebraic algorithms for matching and matroid problems: We present new algebraic approaches for two well-known combinatorial problems: nonbipartite matching and matroid intersection. Our work yields new randomized algorithms that exceed or match the efficiency of existing algorithms. For nonbipartite matching, we obtain a simple, purely algebraic algorithm with running time $O(n^\omega)$ where $n$ is the number of vertices and $\omega$ is the matrix multiplication exponent. This resolves the central open problem of Mucha and Sankowski (2004). For matroid intersection, our algorithm has running time $O(nr^{\omega-1})$ for matroids with $n$ elements and rank $r$ that satisfy some natural conditions.) <|cite_end|> <|cite_start|> (Reference: Algebraic algorithms for linear matroid parity problems: We present fast and simple algebraic algorithms for the linear matroid parity problem and its applications. For the linear matroid parity problem, we obtain a simple randomized algorithm with running time <i>O</i>(<i>mr</i><sup>ω-1</sup>), where <i>m</i> and <i>r</i> are the number of columns and the number of rows, respectively, and ω ≈ 2.3727 is the matrix multiplication exponent. This improves the <i>O</i>(<i>mr</i><sup>ω</sup>)-time algorithm by Gabow and Stallmann and matches the running time of the algebraic algorithm for linear matroid intersection, answering a question of Harvey. We also present a very simple alternative algorithm with running time <i>O</i>(<i>mr</i><sup>2</sup>), which does not need fast matrix multiplication.
We further improve the algebraic algorithms for some specific graph problems of interest. For the Mader’s disjoint <i>S</i>-path problem, we present an <i>O</i>(<i>n</i><sup>ω</sup>)-time randomized algorithm where <i>n</i> is the number of vertices. This improves the running time of the existing results considerably and matches the running time of the algebraic algorithms for graph matching. For the graphic matroid parity problem, we give an <i>O</i>(<i>n</i><sup>4</sup>)-time randomized algorithm where <i>n</i> is the number of vertices, and an <i>O</i>(<i>n</i><sup>3</sup>)-time randomized algorithm for a special case useful in designing approximation algorithms. These algorithms are optimal in terms of <i>n</i> as the input size could be Ω (<i>n</i><sup>4</sup>) and Ω (<i>n</i><sup>3</sup>), respectively.
The techniques are based on the algebraic algorithmic framework developed by Mucha and Sankowski, Harvey, and Sankowski. While linear matroid parity and Mader’s disjoint <i>S</i>-path are challenging generalizations for the design of combinatorial algorithms, our results show that both the algebraic algorithms for linear matroid intersection and graph matching can be extended nicely to more general settings. All algorithms are still faster than the existing algorithms even if fast matrix multiplication is not used. These provide simple algorithms that can be easily implemented in practice.) <|cite_end|>.
For graphic matroids, the matroid intersection problem can be solved
in $O(n\sqrt{r}\log r)$ time <|cite_start|> (Reference: Efficient algorithms for independent assignment on graphic and linear matroids: Efficient algorithms are presented for the matroid intersection problem and generalizations. The algorithm for weighted intersection works by scaling the weights. The cardinality algorithm is a special case that takes advantage of greater structure. Efficiency of the algorithms is illustrated by several implementations. On graphic matroids the algorithms run close to the best bounds for trivial matroids (i.e. ordinary bipartite graph matching): O( square root nm log n) for cardinality intersection and O( square root nm log/sup 2/n log(nN)) for weighted intersection (n, m, and N denote the number of vertices, edges, and largest edge weight, respectively; weights are assumed integral). Efficient algorithms are also given for linear matroids. These include both algorithms that are practical and algorithms exploiting fast matrix multiplication.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Efficient theoretic and practical algorithms for linear matroid intersection problems: Efficient algorithms for the matroid intersection problem, both cardinality and weighted versions, are presented. The algorithm for weighted intersection works by scaling the weights. The cardinality algorithm is a special case, but takes advantage of greater structure. Efficiency of the algorithms is illustrated by several implementations on linear matroids. Consider a linear matroid withmelements and rankn. Assume all element weights are integers of magnitude at mostN. Our fastest algorithms use timeO(mn1.77log(nN)) andO(mn1.62) for weighted and unweighted intersection, respectively; this improves the previous best bounds,O(mn2.4) andO(mn2logn), respectively. Corresponding improvements are given for several applications of matroid intersection to numerical computation and dynamic systems.) <|cite_end|>.
The study of approximate matroid intersection problems is newer. As
Chekuri and Quanrud note, Cunningham's
analysis implies an $O(nr/\eps\cdot\Tind)$-time algorithm to compute a
$(1-\eps)$-approximate matroid intersection. Huang et al. <|cite_start|> (Reference: Exact and approximation algorithms for weighted matroid intersection: ) <|cite_end|>and Chekuri and Quanrud study the approximate
\emph{weighted} version. The former gives an $\tilde{O}(nr^{1.5}/\eps\cdot\Tind)$-time
approximation algorithm <|cite_start|> (Reference: Exact and approximation algorithms for weighted matroid intersection: ) <|cite_end|>, while the latter gives an
$O(nr\log^{2}(1/\eps)/\eps^{2}\cdot\Tind)$-time approximation algorithm.
Contrast this with our $\tilde{O}(nr\cdot\Tind)$-time exact and $\tilde{O}(n^{1.5}/\eps^{1.5})$-time
approximate algorithms, albeit for the unweighted version. Finally,
Guruganesh and Singla <|cite_start|> (Reference: Online Matroid Intersection: Beating Half for Random Arrival: For two matroids $\mathcal{M}_1$ and $\mathcal{M}_2$ defined on the same ground set $E$, the online matroid intersection problem is to design an algorithm that constructs a large common independent set in an online fashion. The algorithm is presented with the ground set elements one-by-one in a uniformly random order. At each step, the algorithm must irrevocably decide whether to pick the element, while always maintaining a common independent set. While the natural greedy algorithm---pick an element whenever possible---is half competitive, nothing better was previously known; even for the special case of online bipartite matching in the edge arrival model. We present the first randomized online algorithm that has a $\frac12 + \delta$ competitive ratio in expectation, where $\delta >0$ is a constant. The expectation is over the random order and the coin tosses of the algorithm. As a corollary, we also obtain the first linear time algorithm that beats half competitiveness for offline matroid intersection.) <|cite_end|>give an $\frac{1}{2}+\delta$-approximation
algorithm for a small but fixed constant $\delta$, which runs in $O(n\cdot\Tind)$ time.
We end the introduction with (to our knowledge) the only
known \emph{lower bound} for matroid intersection. Since we are in
the oracle (rank/independence) model, it is natural to expect that
some non-trivial information-theoretic lower bound can be established
for the number of queries required for matroid intersection. Unfortunately,
the only lower bound we are aware of is due to Harvey <|cite_start|> (Reference: Query lower bounds for matroid intersection (combinatorial optimization and discrete algorithms): We consider the number of queries needed to solve the matroid intersection problem, a question raised by Welsh (1976). Given two matroids of rank r on n elements, it is known that O(nr 1:5 ) independence queries suffice. Unfortunately, very little is known about lower bounds for this problem. This paper describes three lower bounds which, to our knowledge, are the best known: 2n − 2 queries are needed for rank 1 matroids, n queries are needed for rank n − 1 matroids, and (log 2 3)n − o(n) queries are needed for matroids of rank n=2. The first two results are elementary, and the last uses methods from communication complexity and group representation theory.) <|cite_end|>.
For matroids with $r=n/2$, Harvey <|cite_start|> (Reference: Query lower bounds for matroid intersection (combinatorial optimization and discrete algorithms): We consider the number of queries needed to solve the matroid intersection problem, a question raised by Welsh (1976). Given two matroids of rank r on n elements, it is known that O(nr 1:5 ) independence queries suffice. Unfortunately, very little is known about lower bounds for this problem. This paper describes three lower bounds which, to our knowledge, are the best known: 2n − 2 queries are needed for rank 1 matroids, n queries are needed for rank n − 1 matroids, and (log 2 3)n − o(n) queries are needed for matroids of rank n=2. The first two results are elementary, and the last uses methods from communication complexity and group representation theory.) <|cite_end|>shows a lower bound
of $(\log_{2}3)n-o(n)$ queries. Obtaining an $\omega(n)$ lower bound
is a challenging open problem. <|paper_end|> | [
"<|reference_start|> Algebraic algorithms for linear matroid parity problems: We present fast and simple algebraic algorithms for the linear matroid parity problem and its applications. For the linear matroid parity problem, we obtain a simple randomized algorithm with running time <i>O</i>(<i>mr</i><sup>ω-1</sup>), where <i>m</i> and <i>r</i> are the number of columns and the number of rows, respectively, and ω ≈ 2.3727 is the matrix multiplication exponent. This improves the <i>O</i>(<i>mr</i><sup>ω</sup>)-time algorithm by Gabow and Stallmann and matches the running time of the algebraic algorithm for linear matroid intersection, answering a question of Harvey. We also present a very simple alternative algorithm with running time <i>O</i>(<i>mr</i><sup>2</sup>), which does not need fast matrix multiplication.\n We further improve the algebraic algorithms for some specific graph problems of interest. For the Mader’s disjoint <i>S</i>-path problem, we present an <i>O</i>(<i>n</i><sup>ω</sup>)-time randomized algorithm where <i>n</i> is the number of vertices. This improves the running time of the existing results considerably and matches the running time of the algebraic algorithms for graph matching. For the graphic matroid parity problem, we give an <i>O</i>(<i>n</i><sup>4</sup>)-time randomized algorithm where <i>n</i> is the number of vertices, and an <i>O</i>(<i>n</i><sup>3</sup>)-time randomized algorithm for a special case useful in designing approximation algorithms. These algorithms are optimal in terms of <i>n</i> as the input size could be Ω (<i>n</i><sup>4</sup>) and Ω (<i>n</i><sup>3</sup>), respectively.\n The techniques are based on the algebraic algorithmic framework developed by Mucha and Sankowski, Harvey, and Sankowski. While linear matroid parity and Mader’s disjoint <i>S</i>-path are challenging generalizations for the design of combinatorial algorithms, our results show that both the algebraic algorithms for linear matroid intersection and graph matching can be extended nicely to more general settings. All algorithms are still faster than the existing algorithms even if fast matrix multiplication is not used. These provide simple algorithms that can be easily implemented in practice. <|reference_end|>",
"<|reference_start|> Efficient algorithms for independent assignment on graphic and linear matroids: Efficient algorithms are presented for the matroid intersection problem and generalizations. The algorithm for weighted intersection works by scaling the weights. The cardinality algorithm is a special case that takes advantage of greater structure. Efficiency of the algorithms is illustrated by several implementations. On graphic matroids the algorithms run close to the best bounds for trivial matroids (i.e. ordinary bipartite graph matching): O( square root nm log n) for cardinality intersection and O( square root nm log/sup 2/n log(nN)) for weighted intersection (n, m, and N denote the number of vertices, edges, and largest edge weight, respectively; weights are assumed integral). Efficient algorithms are also given for linear matroids. These include both algorithms that are practical and algorithms exploiting fast matrix multiplication.<<ETX>> <|reference_end|>",
"<|reference_start|> Exact and approximation algorithms for weighted matroid intersection: <|reference_end|>",
"<|reference_start|> Query lower bounds for matroid intersection (combinatorial optimization and discrete algorithms): We consider the number of queries needed to solve the matroid intersection problem, a question raised by Welsh (1976). Given two matroids of rank r on n elements, it is known that O(nr 1:5 ) independence queries suffice. Unfortunately, very little is known about lower bounds for this problem. This paper describes three lower bounds which, to our knowledge, are the best known: 2n − 2 queries are needed for rank 1 matroids, n queries are needed for rank n − 1 matroids, and (log 2 3)n − o(n) queries are needed for matroids of rank n=2. The first two results are elementary, and the last uses methods from communication complexity and group representation theory. <|reference_end|>"
] | [
5,
6,
8,
11
] | {"<|multi_cite_1_1|>": "ss-1278651", "<|multi_cite_1_2|>": "ss-2186624", "<|multi_cite_2_1|>": "ss-1687427", "<|multi_cite_2_2|>": "arxiv-5046", "<|multi_cite_2_3|>": "ss-1518275", "<|cite_3|>": "ss-1111937", "<|multi_cite_4_1|>": "ss-915257", "<|multi_cite_4_2|>": "ss-1833958", "<|multi_cite_4_3|>": "ss-730910", "<|multi_cite_4_4|>": "ss-2186625", "<|multi_cite_4_5|>": "ss-1550929", "<|multi_cite_4_6|>": "arxiv-82753", "<|cite_5|>": "ss-730910", "<|cite_6|>": "arxiv-82753", "<|cite_7|>": "arxiv-198912", "<|multi_cite_8_1|>": "arxiv-16685", "<|multi_cite_8_2|>": "ss-1146101", "<|multi_cite_8_3|>": "arxiv-44145", "<|multi_cite_8_4|>": "arxiv-44186", "<|multi_cite_8_7|>": "ss-1278156", "<|cite_9|>": "ss-730910", "<|cite_11|>": "arxiv-82753", "<|cite_12|>": "ss-1111937", "<|cite_13|>": "arxiv-82753", "<|cite_14|>": "ss-1111937", "<|cite_15|>": "ss-730910", "<|multi_cite_16_1|>": "ss-1111937", "<|multi_cite_16_2|>": "ss-915257", "<|multi_cite_16_3|>": "ss-1833958", "<|cite_17|>": "ss-730910", "<|cite_18|>": "ss-951511", "<|cite_19|>": "ss-951511", "<|cite_20|>": "ss-730910", "<|cite_21|>": "ss-951511", "<|cite_22|>": "ss-730910", "<|cite_23|>": "ss-730910", "<|cite_24|>": "ss-951511", "<|cite_25|>": "ss-1267326", "<|multi_cite_26_1|>": "ss-1111937", "<|multi_cite_26_2|>": "ss-915257", "<|multi_cite_26_3|>": "ss-1833958", "<|multi_cite_27_1|>": "ss-1833958", "<|multi_cite_27_2|>": "ss-2298357", "<|multi_cite_27_3|>": "ss-2045914", "<|multi_cite_27_4|>": "ss-2186625", "<|multi_cite_27_5|>": "ss-2470778", "<|multi_cite_27_6|>": "ss-1550929", "<|multi_cite_27_7|>": "arxiv-82753", "<|multi_cite_27_8|>": "ss-1278156", "<|cite_28|>": "ss-951511", "<|cite_29|>": "arxiv-47790", "<|multi_cite_30_1|>": "ss-855585", "<|multi_cite_30_2|>": "ss-1690777", "<|multi_cite_31_1|>": "ss-1690777", "<|multi_cite_31_2|>": "ss-1449535", "<|multi_cite_32_1|>": "ss-2186625", "<|multi_cite_32_2|>": "ss-2470778", "<|cite_34|>": "ss-1278156", "<|cite_36|>": "ss-1278156", "<|cite_37|>": "arxiv-89320", "<|multi_cite_38_1|>": "ss-2186626", "<|cite_39|>": "ss-2186626"} |
1902.04111 | <|paper_start|> Title: Statistical Model Checking for Hyperproperties
Abstract: Statistical Model Checking for Hyperproperties: Hyperproperties have been shown to be a powerful tool for expressing and reasoning about information-flow security policies. In this paper, we investigate the problem of statistical model checking (SMC) for hyperproperties. Unlike exhaustive model checking, SMC works by drawing samples from the system at hand and evaluating the specification with statistical confidence. The main benefit of applying SMC over exhaustive techniques is its efficiency and scalability. To reason about probabilistic hyperproperties, we first propose the temporal logic HyperPCTL* that extends PCTL* and HyperPCTL. We show that HyperPCTL* can express important probabilistic information-flow security policies that cannot be expressed with HyperPCTL. Then, we introduce SMC algorithms for verifying HyperPCTL* formulas on discrete-time Markov chains, based on sequential probability ratio tests (SPRT) with a new notion of multi-dimensional indifference region. Our SMC algorithms can handle both non-nested and nested probability operators for any desired significance level. To show the effectiveness of our technique, we evaluate our SMC algorithms on four case studies focused on information security: timing side-channel vulnerability in encryption, probabilistic anonymity in dining cryptographers, probabilistic noninterference of parallel programs, and the performance of a randomized cache replacement policy that acts as a countermeasure against cache flush attacks.
Introduction
\label{sec:intro}
{\em Randomization} has been a powerful tool in the design and development of
many algorithms and protocols that make probabilistic guarantees in the area of
information security. Prominent examples such as {\em quantitative information
flow} <|cite_start|> (Reference: An Information-theoretic Model for Adaptive Side-channel Attacks: We present a model of adaptive side-channel attacks which we combine with information-theoretic metrics to quantify the information revealed to an attacker. This allows us to express an attacker's remaining uncertainty about a secret as a function of the number of side-channel measurements made. We present algorithms and approximation techniques for computing this measure. We also give examples of how they can be used to analyze the resistance of hardware implementations of cryptographic functions to both timing and power attacks.) <|cite_end|> <|cite_start|> (Reference: On the Foundations of Quantitative Information Flow: ) <|cite_end|>, {\em probabilistic noninterference} <|cite_start|> (Reference: Toward a Mathematical Foundation for Information Flow Security: A general-purpose, probabilistic state machine model which can be used to model a large class of nondeterministic (as well as deterministic) computer systems is described. The necessary probability theory to rigorously state and prove probabilistic properties of modeled systems is developed. A definition of information flow-security making use of this formalism is given. Intuitively, information flow security is the aspect of computer security concerned with how information is permitted to flow through a computer system. It is proved that the proposed definition of information flow security implies an information-theoretic definition. Finally, the author gives a verification condition for information flow security and proves that it implies the proposed definition of information flow security.<<ETX>>) <|cite_end|>, and
{\em differential privacy} <|cite_start|> (Reference: The algorithmic foundations of Differential Privacy: The problem of privacy-preserving data analysis has a long history spanning multiple disciplines. As electronic data about individuals becomes increasingly detailed, and as technology enables ever more powerful collection and curation of these data, the need increases for a robust, meaningful, and mathematically rigorous definition of privacy, together with a computationally rich class of algorithms that satisfy this definition. Differential Privacy is such a definition.After motivating and discussing the meaning of differential privacy, the preponderance of this monograph is devoted to fundamental techniques for achieving differential privacy, and application of these techniques in creative combinations, using the query-release problem as an ongoing example. A key point is that, by rethinking the computational goal, one can often obtain far better results than would be achieved by methodically replacing each step of a non-private computation with a differentially private implementation. Despite some astonishingly powerful computational results, there are still fundamental limitations — not just on what can be achieved with differential privacy but on what can be achieved with any method that protects against a complete breakdown in privacy. Virtually all the algorithms discussed herein maintain differential privacy against adversaries of arbitrary computational power. Certain algorithms are computationally intensive, others are efficient. Computational complexity for the adversary and the algorithm are both discussed.We then turn from fundamentals to applications other than queryrelease, discussing differentially private methods for mechanism design and machine learning. The vast majority of the literature on differentially private algorithms considers a single, static, database that is subject to many analyses. Differential privacy in other models, including distributed databases and computations on data streams is discussed.Finally, we note that this work is meant as a thorough introduction to the problems and techniques of differential privacy, but is not intended to be an exhaustive survey — there is by now a vast amount of work in differential privacy, and we can cover only a small portion of it.) <|cite_end|> quantify the amount of information
leakage and the relation between
two probabilistic execution traces of a system. These
and similar requirements constitute {\em probabilistic
hyperproperties} <|cite_start|> (Reference: {Hyperproperties: Properties, which have long been used for reasoning about systems, are sets of traces. Hyperproperties, introduced here, are sets of properties. Hyperproperties can express security policies, such as secure information flow, that properties cannot. Safety and liveness are generalized to hyperproperties, and every hyperproperty is shown to be the intersection of a safety hyperproperty and a liveness hyperproperty. A verification technique for safety hyperproperties is given and is shown to generalize prior techniques for verifying secure information flow. Refinement is shown to be valid for safety hyperproperties. A topological characterization of hyperproperties is given.) <|cite_end|> <|cite_start|> (Reference: HyperPCTL: A Temporal Logic for Probabilistic Hyperproperties: In this paper, we propose a new logic for expressing and reasoning about probabilistic hyperproperties. Hyperproperties characterize the relation between different independent executions of a system. Probabilistic hyperproperties express quantitative dependencies between such executions. The standard temporal logics for probabilistic systems, i.e., PCTL and PCTL* can refer only to a single path at a time and, hence, cannot express many probabilistic hyperproperties of interest. The logic proposed in this paper, \HyperPCTL, adds explicit and simultaneous quantification over multiple traces to PCTL. Such quantification allows expressing probabilistic hyperproperties. A model checking algorithm for the proposed logic is also given for discrete-time Markov chains.) <|cite_end|>. They extend traditional trace properties from
sets of execution traces to sets of sets of execution traces and allow for
explicit and simultaneous quantification over the temporal behavior of multiple
execution traces. Probabilistic hyperproperties stipulate the probability
relation between independent executions.
{\em Model checking}, an automated technique that verifies the correctness of
a system with respect to a formal specification, has arguably been the most
successful story of using formal methods in the past three decades. Since many
systems have a stochastic nature (e.g., randomized distributed algorithms), model
checking of such systems has been an active area of research. Temporal logics such as \pctls <|cite_start|> (Reference: {Principles of model checking: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.) <|cite_end|> as well as model checkers \comp{PRISM} <|cite_start|> (Reference: PRISM 4.0: Verification of Probabilistic Real-Time Systems: ) <|cite_end|> and \comp{STORM} <|cite_start|> (Reference: A storm is Coming: A Modern Probabilistic Model Checker: We launch the new probabilistic model checker storm. It features the analysis of discrete- and continuous-time variants of both Markov chains and MDPs. It supports the PRISM and JANI modeling languages, probabilistic programs, dynamic fault trees and generalized stochastic Petri nets. It has a modular set-up in which solvers and symbolic engines can easily be exchanged. It offers a Python API for rapid prototyping by encapsulating storm's fast and scalable algorithms. Experiments on a variety of benchmarks show its competitive performance.) <|cite_end|>
have been developed as formalisms and tools to express and reason about probabilistic
systems. However, these techniques are unable to capture and verify
probabilistic hyperproperties that are vital to reason about quantified
information-flow security.
The state of the art in specification and verification of probabilistic
hyperproperties is limited to the temporal logic \hpctl <|cite_start|> (Reference: HyperPCTL: A Temporal Logic for Probabilistic Hyperproperties: In this paper, we propose a new logic for expressing and reasoning about probabilistic hyperproperties. Hyperproperties characterize the relation between different independent executions of a system. Probabilistic hyperproperties express quantitative dependencies between such executions. The standard temporal logics for probabilistic systems, i.e., PCTL and PCTL* can refer only to a single path at a time and, hence, cannot express many probabilistic hyperproperties of interest. The logic proposed in this paper, \HyperPCTL, adds explicit and simultaneous quantification over multiple traces to PCTL. Such quantification allows expressing probabilistic hyperproperties. A model checking algorithm for the proposed logic is also given for discrete-time Markov chains.) <|cite_end|>. The model
checking algorithm for \hpctl utilizes a numerical approach that iteratively
computes the exact measure of paths satisfying relevant sub-formulas. In this
context, we currently face two significant and orthogonal gaps in applying
verification of probabilistic hyperproperties in~practice:
\begin{itemize}
\item {\bf Expressiveness.} \ First, \hpctl does {\em not} allow (1)~nesting of
temporal operators, which is necessary to express requirements such as performance guarantees in randomized cache replacement protocols that defend
against cache-flush attacks, and (2)~explicit quantification over execution paths, which is necessary to reason about the probability of reaching certain~states.
\item {\bf Scalability.} \ Second, and perhaps more importantly, numerical algorithms for probabilistic model checking, including the one proposed in <|cite_start|> (Reference: HyperPCTL: A Temporal Logic for Probabilistic Hyperproperties: In this paper, we propose a new logic for expressing and reasoning about probabilistic hyperproperties. Hyperproperties characterize the relation between different independent executions of a system. Probabilistic hyperproperties express quantitative dependencies between such executions. The standard temporal logics for probabilistic systems, i.e., PCTL and PCTL* can refer only to a single path at a time and, hence, cannot express many probabilistic hyperproperties of interest. The logic proposed in this paper, \HyperPCTL, adds explicit and simultaneous quantification over multiple traces to PCTL. Such quantification allows expressing probabilistic hyperproperties. A model checking algorithm for the proposed logic is also given for discrete-time Markov chains.) <|cite_end|>, tend to require substantial time and space, and often run into
serious scalability issues. Indeed, these algorithms work only for small systems
that have certain structural properties. On top of this difficulty, another major challenge in verifying hyperproperties is that the computational
complexity for exhaustive verification grows at least exponentially in the
number of quantifiers of the input
formula <|cite_start|> (Reference: HyperPCTL: A Temporal Logic for Probabilistic Hyperproperties: In this paper, we propose a new logic for expressing and reasoning about probabilistic hyperproperties. Hyperproperties characterize the relation between different independent executions of a system. Probabilistic hyperproperties express quantitative dependencies between such executions. The standard temporal logics for probabilistic systems, i.e., PCTL and PCTL* can refer only to a single path at a time and, hence, cannot express many probabilistic hyperproperties of interest. The logic proposed in this paper, \HyperPCTL, adds explicit and simultaneous quantification over multiple traces to PCTL. Such quantification allows expressing probabilistic hyperproperties. A model checking algorithm for the proposed logic is also given for discrete-time Markov chains.) <|cite_end|> <|cite_start|> (Reference: Runtime verification of k-safety hyperproperties in hyperltl: This paper introduces a novel runtime verification technique for a rich sub-class of Clarkson and Schneider's hyperproperties. The primary application of such properties is in expressing security policies (e.g., information flow) that cannot be expressed in trace-based specification languages (e.g., LTL). First, to incorporate syntactic means, we draw connections between safety and co-safety hyperproperties and the temporal logic HYPERLTL, which allows explicit quantification over multiple executions. We also define the notion of monitorability in HYPERLTL and identify classes of monitorable HYPERLTL formulas. Then, we introduce an algorithm for monitoring k-safety and co-k-safety hyperproperties expressed in HYPERLTL. Our technique is based on runtime formula progression as well as on-the-fly monitor synthesis across multiple executions. We analyze different performance aspects of our technique by conducting thorough experiments on monitoring security policies for information flow and observational determinism on a real-world location-based service dataset as well as synthetic trace sets.) <|cite_end|> <|cite_start|> (Reference: Temporal Logics for Hyperproperties: Two new logics for verification of hyperproperties are proposed. Hyperproperties characterize security policies, such as noninterference, as a property of sets of computation paths. Standard temporal logics such as LTL, CTL, and CTL* can refer only to a single path at a time, hence cannot express many hyperproperties of interest. The logics proposed here, HyperLTL and HyperCTL*, add explicit and simultaneous quantification over multiple paths to LTL and to CTL*. This kind of quantification enables expression of hyperproperties. A model checking algorithm for the proposed logics is given. For a fragment of HyperLTL, a prototype model checker has been implemented.) <|cite_end|> <|cite_start|> (Reference: The Complexity of Monitoring Hyperproperties: We study the runtime verification of hyperproperties, expressed in the temporal logic HyperLTL, as a means to inspect a system with respect to security polices. Runtime monitors for hyperproperties analyze trace logs that are organized by common prefixes in the form of a tree-shaped Kripke structure, or are organized both by common prefixes and by common suffixes in the form of an acyclic Kripke structure. 
Unlike runtime verification techniques for trace properties, where the monitor tracks the state of the specification but usually does not need to store traces, a monitor for hyperproperties repeatedly model checks the growing Kripke structure. This calls for a rigorous complexity analysis of the model checking problem over tree-shaped and acyclic Kripke structures. We show that for trees, the complexity in the size of the Kripke structure is L-complete independently of the number of quantifier alternations in the HyperLTL formula. For acyclic Kripke structures, the complexity is PSPACE-complete (in the level of the polynomial hierarchy that corresponds to the number of quantifier alternations). The combined complexity in the size of the Kripke structure and the length of the HyperLTL formula is PSPACE-complete for both trees and acyclic Kripke structures, and is as low as NC for the relevant case of trees and alternation-free HyperLTL formulas. Thus, the size and shape of both the Kripke structure and the formula have significant impact on the complexity of the model checking problem.) <|cite_end|>.
\end{itemize}
In this work, our goal is to address the above stumbling blocks
(expressiveness and scalability) by investigating {\em statistical model
checking} (SMC) <|cite_start|> (Reference: A survey of statistical model checking: Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.) <|cite_end|> <|cite_start|> (Reference: Statistical Model Checking: An Overview: ) <|cite_end|> <|cite_start|> (Reference: Statistical Model Checking: Past, Present, and Future: ) <|cite_end|>
for hyperproperties with probabilistic guarantees.
To this end, we first introduce, over discrete-time Markov chains,
the temporal logic \hpctls that extends \pctls <|cite_start|> (Reference: {Principles of model checking: Our growing dependence on increasingly complex computer and software systems necessitates the development of formalisms, techniques, and tools for assessing functional properties of these systems. One such technique that has emerged in the last twenty years is model checking, which systematically (and automatically) checks whether a model of a given system satisfies a desired property such as deadlock freedom, invariants, and request-response properties. This automated technique for verification and debugging has developed into a mature and widely used approach with many applications. Principles of Model Checking offers a comprehensive introduction to model checking that is not only a text suitable for classroom use but also a valuable reference for researchers and practitioners in the field. The book begins with the basic principles for modeling concurrent and communicating systems, introduces different classes of properties (including safety and liveness), presents the notion of fairness, and provides automata-based algorithms for these properties. It introduces the temporal logics LTL and CTL, compares them, and covers algorithms for verifying these logics, discussing real-time systems as well as systems subject to random phenomena. Separate chapters treat such efficiency-improving techniques as abstraction and symbolic manipulation. The book includes an extensive set of examples (most of which run through several chapters) and a complete set of basic results accompanied by detailed proofs. Each chapter concludes with a summary, bibliographic notes, and an extensive list of exercises of both practical and theoretical nature.) <|cite_end|> by
(i) allowing explicit quantification over paths,
and \hpctl <|cite_start|> (Reference: HyperPCTL: A Temporal Logic for Probabilistic Hyperproperties: In this paper, we propose a new logic for expressing and reasoning about probabilistic hyperproperties. Hyperproperties characterize the relation between different independent executions of a system. Probabilistic hyperproperties express quantitative dependencies between such executions. The standard temporal logics for probabilistic systems, i.e., PCTL and PCTL* can refer only to a single path at a time and, hence, cannot express many probabilistic hyperproperties of interest. The logic proposed in this paper, \HyperPCTL, adds explicit and simultaneous quantification over multiple traces to PCTL. Such quantification allows expressing probabilistic hyperproperties. A model checking algorithm for the proposed logic is also given for discrete-time Markov chains.) <|cite_end|>
by (ii) allowing nested probability and temporal operators.
These two features are crucial in expressing probabilistic hyperproperties,
such as probabilistic noninterference.
Specifically, consider a probabilistic program with a high-security input $h
\in \{0,1\}$ and a low-security output $l \in \{0,1\}$.
Probabilistic noninterference requires that the probability of observing the low-security output $l=0$ (or $l=1$) should be equal for two executions $\pi_{h=0}$ and $\pi_{h=1}$
that have the high-security input
$h=0$ and $h=1$, respectively. In other words, the high-security input cannot be
inferred from the low-security output through a probabilistic channel -- i.e.,
$$
\P^{\pi_{h=0}}( \pi_{h=0} \text{ outputs } l=0 )
= \P^{\pi_{h=1}}( \pi_{h=1} \text{ outputs } l=0 )
$$
This property involves the relation between two
executions $\pi_{h=0}$ and $\pi_{h=1}$,
and cannot be expressed by non-hyper logics, such as \pctls.
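To make this requirement concrete, the following Python sketch (illustrative only; the program \texttt{leaky\_program}, its branching probabilities, and the sample size are hypothetical placeholders, not part of our formal development) empirically compares the two output distributions:
\begin{verbatim}
import random

def leaky_program(h: int) -> int:
    # Hypothetical probabilistic program with high input h and low output l.
    # A noninterfering program uses the same output distribution for h=0 and h=1.
    if h == 0:
        return 0 if random.random() < 0.5 else 1
    return 0 if random.random() < 0.5 else 1   # change 0.5 here to introduce a leak

def estimate_prob_l0(h: int, samples: int = 100_000) -> float:
    # Empirical estimate of P^{pi_h}(pi_h outputs l = 0).
    return sum(leaky_program(h) == 0 for _ in range(samples)) / samples

p0, p1 = estimate_prob_l0(0), estimate_prob_l0(1)
print(f"P(l=0 | h=0) ~ {p0:.3f}   P(l=0 | h=1) ~ {p1:.3f}")
# Probabilistic noninterference demands that the two estimates converge to the
# same value; a statistically significant gap witnesses an information leak.
\end{verbatim}
Such a naive comparison offers no statistical guarantee by itself; the SPRT-based machinery developed in this paper turns this kind of check into a test with a prescribed significance level.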
We also illustrate that \hpctls can elegantly express properties such as
generalized probabilistic causation, countermeasures for side-channel attacks,
probabilistic noninterference, and probabilistic independence among executions.
In addition, the latter is an important performance property for cache
replacement policies that defend against cache flush attacks and cannot be
expressed in \hpctl, as it requires using nested temporal operators.
To tackle the scalability problem, we turn to SMC -- a popular
approach for dealing with probabilistic systems that uses a {\em sample-based}
technique, where one asserts whether the system satisfies a property by
observing some of its executions <|cite_start|> (Reference: Statistical Model Checking: An Overview: ) <|cite_end|> <|cite_start|> (Reference: Statistical Model Checking for Biological Applications: In this paper we survey recent work on the use of statistical model checking techniques for biological applications. We begin with an overview of the basic modelling techniques for biochemical reactions and their corresponding stochastic simulation algorithm - the Gillespie algorithm. We continue by giving a brief description of the relation between stochastic models and continuous (ordinary differential equation) models. Next we present a literature survey, divided in two general areas. In the first area we focus on works addressing verification of biological models, while in the second area we focus on papers tackling the parameter synthesis problem. We conclude with some open problems and directions for further research.) <|cite_end|> <|cite_start|> (Reference: Ymer: A Statistical Model Checker: ) <|cite_end|> <|cite_start|> (Reference: Statistical verification of the Toyota powertrain control verification benchmark: The Toyota Powertrain Control Verification Benchmark has been recently proposed as challenge problems that capture features of realistic automotive designs. In this paper we statistically verify the most complicated of the powertrain control models proposed, that includes features like delayed differential and difference equations, look-up tables, and highly non-linear dynamics, by simulating the C++ code generated from the SimulinkTM model of the design. Our results show that for at least 98% of the possible initial operating conditions the desired properties hold. These are the first verification results for this model, statistical or otherwise.) <|cite_end|> <|cite_start|> (Reference: Statistical verification of learning-based cyber-physical systems: The use of Neural Network (NN)-based controllers has attracted significant attention in recent years. Yet, due to the complexity and non-linearity of such NN-based cyber-physical systems (CPS), existing verification techniques that employ exhaustive state-space search, face significant scalability challenges; this effectively limits their use for analysis of real-world CPS. In this work, we focus on the use of Statistical Model Checking (SMC) for verifying complex NN-controlled CPS. Using an SMC approach based on Clopper-Pearson confidence levels, we verify from samples specifications that are captured by Signal Temporal Logic (STL) formulas. Specifically, we consider three CPS benchmarks with varying levels of plant and controller complexity, as well as the type of considered STL properties - reachability property for a mountain car, safety property for a bipedal robot, and control performance of the closed-loop magnet levitation system. On these benchmarks, we show that SMC methods can be successfully used to provide high-assurance for learning-based CPS.) <|cite_end|>.
The general idea of SMC is to treat the problem of checking a temporal logic
formula on a probabilistic system as {\em hypothesis
testing} <|cite_start|> (Reference: On Statistical Model Checking of Stochastic Systems: ) <|cite_end|> <|cite_start|> (Reference: A survey of statistical model checking: Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.) <|cite_end|>. By drawing samples from the underlying
probabilistic system, the satisfaction of the formula can be inferred
with high confidence levels. To the best of our knowledge, the work on SMC for
hyperproperties is limited to <|cite_start|> (Reference: Statistical verification of hyperproperties for cyber-physical systems: Many important properties of cyber-physical systems (CPS) are defined upon the relationship between multiple executions simultaneously in continuous time. Examples include probabilistic fairness and sensitivity to modeling errors (i.e., parameters changes) for real-valued signals. These requirements can only be specified by hyperproperties. In this article, we focus on verifying probabilistic hyperproperties for CPS. To cover a wide range of modeling formalisms, we first propose a general model of probabilistic uncertain systems (PUSs) that unify commonly studied CPS models such as continuous-time Markov chains (CTMCs) and probabilistically parametrized Hybrid I/O Automata (P2HIOA). To formally specify hyperproperties, we propose a new temporal logic, hyper probabilistic signal temporal logic (HyperPSTL) that serves as a hyper and probabilistic version of the conventional signal temporal logic (STL). Considering the complexity of real-world systems that can be captured as PUSs, we adopt a statistical model checking (SMC) approach for their verification. We develop a new SMC technique based on the direct computation of significance levels of statistical assertions for HyperPSTL specifications, which requires no a priori knowledge on the indifference margin. Then, we introduce SMC algorithms for HyperPSTL specifications on the joint probabilistic distribution of multiple paths, as well as specifications with nested probabilistic operators quantifying different paths, which cannot be handled by existing SMC algorithms. Finally, we show the effectiveness of our SMC algorithms on CPS benchmarks with varying levels of complexity, including the Toyota Powertrain Control System.) <|cite_end|>, where the authors propose
an SMC algorithm for hyperproperties for cyber-physical
systems using {\em Clopper-Pearson} (CP) confidence
intervals.
In this work, we propose another SMC algorithm
for hyperproperties using
{\em sequential probability
ratio tests} (SPRT) <|cite_start|> (Reference: Sequential Tests of Statistical Hypotheses: By a sequential test of a statistical hypothesis is meant any statistical test procedure which gives a specific rule, at any stage of the experiment (at the n-th trial for each integral value of n), for making one of the following three decisions: (1) to accept the hypothesis being tested (null hypothesis), (2) to reject the null hypothesis, (3) to continue the experiment by making an additional observation. Thus, such a test procedure is carried out sequentially. On the basis of the first trial, one of the three decisions mentioned above is made. If the first or the second decision is made, the process is terminated. If the third decision is made, a second trial is performed. Again on the basis of the first two trials one of the three decisions is made and if the third decision is reached a third trial is performed, etc. This process is continued until either the first or the second decision is made. An essential feature of the sequential test, as distinguished from the current test procedure, is that the number of observations required by the sequential test is not predetermined, but is a random variable due to the fact that at any stage of the experiment the decision of terminating the process depends on the results of the observations previously made. The current test procedure may be considered a limiting case of a sequential test in the following sense: For any positive integer n less than some fixed positive integer N, the third decision is always taken at the n-th trial irrespective ofthe results of these first n trials. At the N-th trial either the first or the second decision is taken. Which decision is taken will depend, of course, on the results of the N trials. In a sequential test, as well as in the current test procedure, we may commit two kinds of errors. We may reject the null hypothesis when it is true (error of the first kind), or we may accept the null hypothesis when some alternative) <|cite_end|>,
which are more efficient for statistical
inference than using confidence
intervals.
Developing SMC for \hpctls formulas
using SPRT poses significant challenges
that do not appear in SMC
for non-hyper probabilistic temporal logics, such as
\pctls.
This is because, in \hpctls, one can express complex probabilistic quantification among different paths.
Specifically, \hpctls allows~for:
\begin{itemize}
\item {\bf Probabilistic quantification of multiple paths.} For
example, formula
\begin{equation} \label{eq:ex1}
\P^{(\pv_1, \pv_2)} (\ap^{\pv_1} \U \ap^{\pv_2}) > p
\end{equation}
means that the probability that an atomic proposition $\ap$ holds on a
random path $\pv_1$ until it becomes true on another random path $\pv_2$ is
greater than some $p \in [0,1]$ (a sampling-based estimate of this probability is sketched after this list).
\item {\bf Arithmetic of probabilistic quantification.} For example, formula
\begin{equation} \label{eq:ex2}
\P^{\pv_1} (\F \ap^{\pv_1}) + \P^{\pv_2} (\G \ap^{\pv_2}) > p
\end{equation}
stipulates that the sum of the probability that $\ap$ finally holds
and the probability that $\ap$ always holds
is greater than some $p \geq 0$.
\item {\bf Nested probabilistic quantification.} This is different
from nested probabilistic quantification in \pctls. For example, formula
\begin{equation} \label{eq:ex3}
\P^{\pv_1} \big(\P^{\pv_2} (\ap^{\pv_1} \U \ap^{\pv_2}) > p_1 \big) >
p_2,
\end{equation}
requires that for a (given) path $\pv_1$, the probability that
$(\ap^{\pv_1} \U \ap^{\pv_2})$ holds for a random path $\pv_2$ is greater than
some $p_1 \in [0,1]$; and this fact should hold with probability greater than
some $p_2 \in [0,1]$ for a random path $\pv_1$.
\end{itemize}
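As referenced in the first item above, the probability in formula~(\ref{eq:ex1}) can be estimated empirically by sampling pairs of independent paths. The following Python sketch does so for a toy two-state Markov chain with a bounded horizon; the chain, its labeling, and the horizon are illustrative assumptions, and this is only a plain Monte-Carlo estimate rather than the SPRT-based procedure developed later:
\begin{verbatim}
import random

# Toy two-state DTMC; the atomic proposition "a" holds exactly in state 1.
TRANS = {0: [(0, 0.6), (1, 0.4)], 1: [(0, 0.3), (1, 0.7)]}
LABEL_A = {1}

def sample_path(init: int, horizon: int) -> list:
    path = [init]
    for _ in range(horizon):
        states, probs = zip(*TRANS[path[-1]])
        path.append(random.choices(states, weights=probs, k=1)[0])
    return path

def until_holds(path1, path2) -> bool:
    # Bounded-horizon check of  a^{pi_1} U a^{pi_2}:  "a" must hold along path1
    # up to (but excluding) the first position where "a" holds on path2.
    for s1, s2 in zip(path1, path2):
        if s2 in LABEL_A:
            return True
        if s1 not in LABEL_A:
            return False
    return False   # no witness within the horizon

def estimate(init: int = 0, horizon: int = 50, samples: int = 100_000) -> float:
    hits = sum(until_holds(sample_path(init, horizon), sample_path(init, horizon))
               for _ in range(samples))
    return hits / samples

print("estimated P^{(pi_1,pi_2)}(a^{pi_1} U a^{pi_2}) =", estimate())
\end{verbatim}
Such a plain estimate carries no error guarantee on its own, which motivates the sequential tests discussed below.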
The different kinds of
complex probabilistic quantification among multiple paths
cannot be handled by existing SMC algorithms
for non-hyper probabilistic temporal logics <|cite_start|> (Reference: A survey of statistical model checking: Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.) <|cite_end|>.
Using SPRT to address the aforementioned challenges
requires a condition on the {\em indifference regions}.
As a simple example, to statistically infer whether $\pr (A) > p$ for some random event $A$
using SPRT, it is required that the probability $\pr(A)$ is
not too ``close'' to $p$; this means that there exists some known $\varepsilon > 0$
such that
$\pr(A) \notin (p - \varepsilon, p + \varepsilon)$, i.e., $\pr(A) \geq p + \varepsilon$ or $\pr(A) \leq p - \varepsilon$.
This is a common assumption used
for many SMC techniques <|cite_start|> (Reference: On Statistical Model Checking of Stochastic Systems: ) <|cite_end|> <|cite_start|> (Reference: A survey of statistical model checking: Interactive, distributed, and embedded systems often behave stochastically, for example, when inputs, message delays, or failures conform to a probability distribution. However, reasoning analytically about the behavior of complex stochastic systems is generally infeasible. While simulations of systems are commonly used in engineering practice, they have not traditionally been used to reason about formal specifications. Statistical model checking (SMC) addresses this weakness by using a simulation-based approach to reason about precise properties specified in a stochastic temporal logic. A specification for a communication system may state that within some time bound, the probability that the number of messages in a queue will be greater than 5 must be less than 0.01. Using SMC, executions of a stochastic system are first sampled, after which statistical techniques are applied to determine whether such a property holds. While the output of sample-based methods are not always correct, statistical inference can quantify the confidence in the result produced. In effect, SMC provides a more widely applicable and scalable alternative to analysis of properties of stochastic systems using numerical and symbolic methods. SMC techniques have been successfully applied to analyze systems with large state spaces in areas such as computer networking, security, and systems biology. In this article, we survey SMC algorithms, techniques, and tools, while emphasizing current limitations and tradeoffs between precision and scalability.) <|cite_end|>.
Therefore, it is sufficient to test between the two most indistinguishable cases
$\pr(A) = p - \varepsilon$
and
$\pr(A) = p + \varepsilon$.
The interval $(p - \varepsilon, p + \varepsilon)$
is usually referred to as the \emph{indifference~region}.
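To make the scalar case concrete, the following Python sketch implements Wald's SPRT for deciding whether $\pr(A) > p$ under the indifference-region assumption above; the sampling routine, the parameter values, and the function name are hypothetical and chosen only for illustration.
\begin{verbatim}
import math, random

def sprt_probability_gt(sample, p, eps, alpha, beta):
    # Wald's SPRT for deciding Pr(A) > p, assuming Pr(A) lies outside
    # the indifference region (p - eps, p + eps).
    # `sample()` returns True iff the event A occurred on a fresh sample.
    p0, p1 = p - eps, p + eps            # the two closest simple hypotheses
    a = math.log((1.0 - beta) / alpha)   # accept "Pr(A) >= p + eps"
    b = math.log(beta / (1.0 - alpha))   # accept "Pr(A) <= p - eps"
    llr = 0.0                            # running log-likelihood ratio
    while b < llr < a:
        x = sample()
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
    return llr >= a

# Hypothetical usage: is the success probability of a biased coin > 0.4?
print(sprt_probability_gt(lambda: random.random() < 0.55,
                          p=0.4, eps=0.05, alpha=0.01, beta=0.01))
\end{verbatim}
Under the indifference-region assumption, the test terminates with probability one, and Wald's classical analysis bounds its error probabilities approximately by the chosen false positive and false negative ratios.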
\yw{In this work, we propose new conditions
on the {\em indifference regions}
that enable the use of SPRT in the SMC of \hpctls.}
For the SMC of arithmetic over
probabilistic quantifications in~\eqref{eq:ex2},
we consider the hypothesis testing problem:
\begin{equation} \label{eq:into_ht}
\begin{split}
& H_0: \big( \P^{\pv_1} (\F \ap^{\pv_1}), \P^{\pv_2} (\G
\ap^{\pv_2}) \big) \in D,
\\ & H_1: \big( \P^{\pv_1} (\F \ap^{\pv_1}), \P^{\pv_2} (\G
\ap^{\pv_2}) \big) \in D^\mathrm{c},
\end{split}
\end{equation}
where $D = \{(p_1, p_2) \in [0,1]^2 \mid p_1 + p_2 > p \}$
and $D^\mathrm{c}$ is its complement set.
To handle the pair of probabilities $\big( \P^{\pv_1} (\F \ap^{\pv_1}),
\P^{\pv_2} (\G \ap^{\pv_2}) \big)$ in \eqref{eq:into_ht},
we propose a novel {\em multi-dimensional} extension of the standard SPRT.
Specifically, we first generalize the notion of the indifference
region (namely, the parameter $\varepsilon$) to a multi-dimensional case.
This new notion of indifference region ensures that
our multi-dimensional SPRT algorithm provides provable probabilistic guarantees
for any desired false positive $\FP \in (0,1)$ and false negative $\FN \in
(0,1)$ ratios.
\yw{Then we note that the hypotheses $H_0$ and $H_1$
in \eqref{eq:into_ht} are composite,
i.e., each of them contains infinitely many simple hypotheses.
To apply SPRT, which mainly deals with simple hypotheses,
to these two composite hypotheses, we propose a geometric condition
to identify the two most indistinguishable simple hypotheses from
$H_0$ and $H_1$, respectively.
We show that if the SPRT can distinguish these two simple hypotheses,
then any two simple hypotheses from $H_0$ and $H_1$ can be distinguished
by the same test.}
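Again purely for illustration, the following Python sketch shows how the same stopping rule extends to the hypothesis test in~\eqref{eq:into_ht}: each observation is now a pair of Bernoulli outcomes, one per path variable, and the log-likelihood ratio is accumulated with respect to one simple hypothesis taken from $H_0$ and one from $H_1$. The particular symmetric placement of these two simple hypotheses is an assumption made here for simplicity and does not reproduce the geometric condition discussed above.
\begin{verbatim}
import math, random

def bernoulli_llr(x, q1, q0):
    # log-likelihood ratio of one Bernoulli observation under q1 versus q0
    return math.log(q1 / q0) if x else math.log((1.0 - q1) / (1.0 - q0))

def sprt_sum_gt(sample_pair, p, eps, alpha, beta):
    # Sequential test of "p1 + p2 >= p + eps" against "p1 + p2 <= p - eps".
    # `sample_pair()` draws one independent pair of Bernoulli outcomes, e.g.
    # ("F a held on a fresh path pi1", "G a held, up to a horizon, on pi2").
    # The two simple hypotheses below sit symmetrically on the two boundary
    # lines -- an assumption of this sketch, not the paper's condition.
    th0 = ((p - eps) / 2.0, (p - eps) / 2.0)   # on the line p1 + p2 = p - eps
    th1 = ((p + eps) / 2.0, (p + eps) / 2.0)   # on the line p1 + p2 = p + eps
    a = math.log((1.0 - beta) / alpha)         # same Wald thresholds as before
    b = math.log(beta / (1.0 - alpha))
    llr = 0.0
    while b < llr < a:
        x1, x2 = sample_pair()
        llr += bernoulli_llr(x1, th1[0], th0[0])
        llr += bernoulli_llr(x2, th1[1], th0[1])
    return llr >= a

# Hypothetical usage with per-path probabilities 0.35 and 0.45 (sum 0.8 > 0.7):
draw = lambda: (random.random() < 0.35, random.random() < 0.45)
print(sprt_sum_gt(draw, p=0.7, eps=0.05, alpha=0.01, beta=0.01))
\end{verbatim}
In practice, the outcomes of $\F \ap$ and $\G \ap$ on sampled paths would themselves be evaluated up to a bounded horizon, as in the earlier path-sampling sketch.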
For the SMC of probabilistic quantification of multiple paths
in~\eqref{eq:ex1},
we note that such quantification over multiple parallel paths can be handled by
generalizing the standard SPRT to tuples of samples.
For the SMC of nested probabilistic quantification
in~\eqref{eq:ex3},
we can perform a compositional analysis
of the probabilistic errors in the SMC of the sub-formulas to obtain the global
false positive and false negative ratios, in the same way as <|cite_start|> (Reference: Statistical verification of hyperproperties for cyber-physical systems: Many important properties of cyber-physical systems (CPS) are defined upon the relationship between multiple executions simultaneously in continuous time. Examples include probabilistic fairness and sensitivity to modeling errors (i.e., parameters changes) for real-valued signals. These requirements can only be specified by hyperproperties. In this article, we focus on verifying probabilistic hyperproperties for CPS. To cover a wide range of modeling formalisms, we first propose a general model of probabilistic uncertain systems (PUSs) that unify commonly studied CPS models such as continuous-time Markov chains (CTMCs) and probabilistically parametrized Hybrid I/O Automata (P2HIOA). To formally specify hyperproperties, we propose a new temporal logic, hyper probabilistic signal temporal logic (HyperPSTL) that serves as a hyper and probabilistic version of the conventional signal temporal logic (STL). Considering the complexity of real-world systems that can be captured as PUSs, we adopt a statistical model checking (SMC) approach for their verification. We develop a new SMC technique based on the direct computation of significance levels of statistical assertions for HyperPSTL specifications, which requires no a priori knowledge on the indifference margin. Then, we introduce SMC algorithms for HyperPSTL specifications on the joint probabilistic distribution of multiple paths, as well as specifications with nested probabilistic operators quantifying different paths, which cannot be handled by existing SMC algorithms. Finally, we show the effectiveness of our SMC algorithms on CPS benchmarks with varying levels of complexity, including the Toyota Powertrain Control System.) <|cite_end|>.
Finally, based on the above new statistical inference algorithms,
we design SMC algorithms for \hpctls.
These algorithms are fully implemented and evaluated on four prominent case studies.\footnote{The simulation code is available at.}
Specifically, we apply our SMC algorithms to analyze:
(i)~the time side-channel vulnerability in encryption <|cite_start|> (Reference: Precise detection of side-channel vulnerabilities using quantitative cartesian Hoare logic: This paper presents Themis, an end-to-end static analysis tool for finding resource-usage side-channel vulnerabilities in Java applications. We introduce the notion of epsilon-bounded non-interference, a variant and relaxation of Goguen and Meseguer's well-known non-interference principle. We then present Quantitative Cartesian Hoare Logic (QCHL), a program logic for verifying epsilon-bounded non-interference. Our tool, Themis, combines automated reasoning in CHL with lightweight static taint analysis to improve scalability. We evaluate Themis on well known Java applications and demonstrate that Themis can find unknown side-channel vulnerabilities in widely-used programs. We also show that Themis can verify the absence of vulnerabilities in repaired versions of vulnerable programs and that Themis compares favorably against Blazer, a state-of-the-art static analysis tool for finding timing side channels in Java applications.) <|cite_end|> <|cite_start|> (Reference: Data-Driven Debugging for Functional Side Channels: Information leaks through side channels are a pervasive problem, even in security-critical applications. Functional side channels arise when an attacker knows that a secret value of a server stays fixed for a certain time. Then, the attacker can observe the server executions on a sequence of different public inputs, each paired with the same secret input. Thus for each secret, the attacker observes a function from public inputs to execution time, for instance, and she can compare these functions for different secrets. First, we introduce a notion of noninterference for functional side channels. We focus on the case of noisy observations, where we demonstrate with examples that there is a practical functional side channel in programs that would be deemed information-leak-free or be underestimated using the standard definition. Second, we develop a framework and techniques for debugging programs for functional side channels. We extend evolutionary fuzzing techniques to generate inputs that exploit functional dependencies of response times on public inputs. We adapt existing results and algorithms in functional data analysis to model the functions and discover the existence of side channels. We use a functional extension of standard decision tree learning to pinpoint the code fragments causing a side channel if there is one. We empirically evaluate the performance of our tool FUCHSIA on a series of micro-benchmarks and realistic Java programs. On the set of benchmarks, we show that FUCHSIA outperforms the state-of-the-art techniques in detecting side channel classes. On the realistic programs, we show the scalability of FUCHSIA in analyzing functional side channels in Java programs with thousands of methods. Also, we show the usefulness of FUCHSIA in finding side channels including a zero-day vulnerability in OpenJDK and another vulnerability in Jetty that was since fixed by the developers.) 
<|cite_end|>, (ii)~probabilistic anonymity in dining cryptographers <|cite_start|> (Reference: The dining cryptographers problem: Unconditional sender and recipient untraceability: ) <|cite_end|>, (iii)~probabilistic noninterference of parallel programs <|cite_start|> (Reference: Security policies and security models: We assune that the reader is familiar with the ubiquity of information in the modern world and is sympathetic with the need for restricting rights to read, add, modify, or delete information in specific contexts. This need is particularly acute for systems having computers as significant components.) <|cite_end|>,
and (iv)~the performance of a random cache replacement
policy <|cite_start|> (Reference: Security Analysis of Cache Replacement Policies: Modern computer architectures share physical resources between different programs in order to increase area-, energy-, and cost-efficiency. Unfortunately, sharing often gives rise to side channels that can be exploited for extracting or transmitting sensitive information. We currently lack techniques for systematic reasoning about this interplay between security and efficiency. In particular, there is no established way for quantifying security properties of shared caches. In this paper, we propose a novel model that enables us to characterize important security properties of caches. Our model encompasses two aspects: (1) The amount of information that can be absorbed by a cache, and (2) the amount of information that can effectively be extracted from the cache by an adversary. We use our model to compute both quantities for common cache replacement policies (FIFO, LRU, and PLRU) and to compare their isolation properties. We further show how our model for information extraction leads to an algorithm that can be used to improve the bounds delivered by the CacheAudit static analyzer.) <|cite_end|> that defends against cache flush attacks.
Our results show that the proposed SMC algorithms
provide the correct answer with high confidence levels
in all cases while requiring very short analysis times.
\paragraph{Organization} The rest of the paper is organized as follows.
We introduce \hpctls
in~\cref{sec:hpctls}.
The expressiveness of \hpctls is
discussed in \cref{sec:express}, before illustrating its application
in~\cref{sec:applications}. Our SMC algorithms for \hpctls are introduced
in~\cref{sec:non-nested}. We present our case studies and experimental
results in~\cref{sec:simulation}. Related work is discussed
in~\cref{sec:related}, before concluding remarks
in~Section~\ref{sec:conclusion}.
Related Work
\label{sec:related}
To the best of our knowledge, the only existing SMC algorithm for hyper temporal logics is the one proposed in <|cite_start|> (Reference: Statistical verification of hyperproperties for cyber-physical systems: Many important properties of cyber-physical systems (CPS) are defined upon the relationship between multiple executions simultaneously in continuous time. Examples include probabilistic fairness and sensitivity to modeling errors (i.e., parameters changes) for real-valued signals. These requirements can only be specified by hyperproperties. In this article, we focus on verifying probabilistic hyperproperties for CPS. To cover a wide range of modeling formalisms, we first propose a general model of probabilistic uncertain systems (PUSs) that unify commonly studied CPS models such as continuous-time Markov chains (CTMCs) and probabilistically parametrized Hybrid I/O Automata (P2HIOA). To formally specify hyperproperties, we propose a new temporal logic, hyper probabilistic signal temporal logic (HyperPSTL) that serves as a hyper and probabilistic version of the conventional signal temporal logic (STL). Considering the complexity of real-world systems that can be captured as PUSs, we adopt a statistical model checking (SMC) approach for their verification. We develop a new SMC technique based on the direct computation of significance levels of statistical assertions for HyperPSTL specifications, which requires no a priori knowledge on the indifference margin. Then, we introduce SMC algorithms for HyperPSTL specifications on the joint probabilistic distribution of multiple paths, as well as specifications with nested probabilistic operators quantifying different paths, which cannot be handled by existing SMC algorithms. Finally, we show the effectiveness of our SMC algorithms on CPS benchmarks with varying levels of complexity, including the Toyota Powertrain Control System.) <|cite_end|>.
It handles complex probabilistic quantifications
similar to those in \hpctls, but using a multi-dimensional extension of the
Clopper--Pearson confidence interval, whereas in this paper our focus is on
SPRT. Moreover, the application domain of <|cite_start|> (Reference: Statistical verification of hyperproperties for cyber-physical systems: Many important properties of cyber-physical systems (CPS) are defined upon the relationship between multiple executions simultaneously in continuous time. Examples include probabilistic fairness and sensitivity to modeling errors (i.e., parameters changes) for real-valued signals. These requirements can only be specified by hyperproperties. In this article, we focus on verifying probabilistic hyperproperties for CPS. To cover a wide range of modeling formalisms, we first propose a general model of probabilistic uncertain systems (PUSs) that unify commonly studied CPS models such as continuous-time Markov chains (CTMCs) and probabilistically parametrized Hybrid I/O Automata (P2HIOA). To formally specify hyperproperties, we propose a new temporal logic, hyper probabilistic signal temporal logic (HyperPSTL) that serves as a hyper and probabilistic version of the conventional signal temporal logic (STL). Considering the complexity of real-world systems that can be captured as PUSs, we adopt a statistical model checking (SMC) approach for their verification. We develop a new SMC technique based on the direct computation of significance levels of statistical assertions for HyperPSTL specifications, which requires no a priori knowledge on the indifference margin. Then, we introduce SMC algorithms for HyperPSTL specifications on the joint probabilistic distribution of multiple paths, as well as specifications with nested probabilistic operators quantifying different paths, which cannot be handled by existing SMC algorithms. Finally, we show the effectiveness of our SMC algorithms on CPS benchmarks with varying levels of complexity, including the Toyota Powertrain Control System.) <|cite_end|> is on timed
hyperproperties and cyber-physical systems, whereas here we
concentrate on applications in information-flow security.
This algorithm provides provable probabilistic guarantees
for any desired false positive $\FP \in (0,1)$
(the probability of wrongly claiming a false formula to be true)
and false negative $\FN \in (0,1)$
(the probability of wrongly claiming a true formula to be false).
Randomization is used in different contexts to quantify the amount
of information leakage as well as to provide probabilistic guarantees about the correctness of security policies. A classic example is probabilistic
noninterference <|cite_start|> (Reference: Probabilistic Interference: D. McCullough's (1988) state machine formulism and definition of restrictiveness are restated. An example system is presented which illustrates the problem of probabilistic interference. An extension to McCullough's work that solves the problem of probabilistic interference is developed. A series of examples are presented which are designed to show the application of this extension. An example which is a novel solution to the so-called secure readers-writers problem is also presented.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Toward a Mathematical Foundation for Information Flow Security: A general-purpose, probabilistic state machine model which can be used to model a large class of nondeterministic (as well as deterministic) computer systems is described. The necessary probability theory to rigorously state and prove probabilistic properties of modeled systems is developed. A definition of information flow-security making use of this formalism is given. Intuitively, information flow security is the aspect of computer security concerned with how information is permitted to flow through a computer system. It is proved that the proposed definition of information flow security implies an information-theoretic definition. Finally, the author gives a verification condition for information flow security and proves that it implies the proposed definition of information flow security.<<ETX>>) <|cite_end|>, which requires that high-security input
should not change the probability of reaching low-security
outputs. There has been extensive work in this area including using probabilistic bisimulation to reason about probabilistic noninterference in multi-threaded programs <|cite_start|> (Reference: Probabilistic noninterference for multi-threaded programs: We present a probability-sensitive confidentiality specification-a form of probabilistic noninterference-for a small multi-threaded programming language with dynamic thread creation. Probabilistic covert channels arise from a scheduler which is probabilistic. Since scheduling policy is typically outside the language specification for multi-threaded languages, we describe how to generalise the security condition in order to define how to generalise the security condition in order to define robust security with respect to a wide class of schedulers, not excluding the possibility of deterministic (e.g., round-robin) schedulers and program-controlled thread priorities. The formulation is based on an adaptation of Larsen and Skou's (1991) notion of probabilistic bisimulation. We show how the security condition satisfies compositionality properties which facilitate straightforward proofs of correctness for, e.g., security type systems. We illustrate this by defining a security type system which improves on previous multi-threaded systems, and by proving it correct with respect to our stronger scheduler-independent security condition.) <|cite_end|>. Another prominent line of work is {\em quantitative
information flow} <|cite_start|> (Reference: On the Foundations of Quantitative Information Flow: ) <|cite_end|> <|cite_start|> (Reference: An Information-theoretic Model for Adaptive Side-channel Attacks: We present a model of adaptive side-channel attacks which we combine with information-theoretic metrics to quantify the information revealed to an attacker. This allows us to express an attacker's remaining uncertainty about a secret as a function of the number of side-channel measurements made. We present algorithms and approximation techniques for computing this measure. We also give examples of how they can be used to analyze the resistance of hardware implementations of cryptographic functions to both timing and power attacks.) <|cite_end|>, which relates information theory to
independent executions of a system and uses different notions of entropy to
quantify the amount of information leaked across different executions.
Recently, there has been significant progress in automatically
{verifying} <|cite_start|> (Reference: Algorithms for Model Checking HyperLTL and HyperCTL ^*: ) <|cite_end|> <|cite_start|> (Reference: Verifying Security Policies in Multi-agent Workflows with Loops: We consider the automatic verification of information flow security policies of web-based workflows, such as conference submission systems like EasyChair. Our workflow description language allows for loops, non-deterministic choice, and an unbounded number of participating agents. The information flow policies are specified in a temporal logic for hyperproperties. We show that the verification problem can be reduced to the satisfiability of a formula of first-order linear-time temporal logic, and provide decidability results for relevant classes of workflows and specifications. We report on experimental results obtained with an implementation of our approach on a series of benchmarks.) <|cite_end|> <|cite_start|> (Reference: Model Checking Quantitative Hyperproperties: Hyperproperties are properties of sets of computation traces. In this paper, we study quantitative hyperproperties, which we define as hyperproperties that express a bound on the number of traces that may appear in a certain relation. For example, quantitative non-interference limits the amount of information about certain secret inputs that is leaked through the observable outputs of a system. Quantitative non-interference thus bounds the number of traces that have the same observable input but different observable output. We study quantitative hyperproperties in the setting of HyperLTL, a temporal logic for hyperproperties. We show that, while quantitative hyperproperties can be expressed in HyperLTL, the running time of the HyperLTL model checking algorithm is, depending on the type of property, exponential or even doubly exponential in the quantitative bound. We improve this complexity with a new model checking algorithm based on model-counting. The new algorithm needs only logarithmic space in the bound and therefore improves, depending on the property, exponentially or even doubly exponentially over the model checking algorithm of HyperLTL. In the worst case, the new algorithm needs polynomial space in the size of the system. Our Max#Sat-based prototype implementation demonstrates, however, that the counting approach is viable on systems with nontrivial quantitative information flow requirements such as a passcode checker.) <|cite_end|> <|cite_start|> (Reference: Verifying Hyperliveness: HyperLTL is an extension of linear-time temporal logic for the specification of hyperproperties, i.e., temporal properties that relate multiple computation traces. HyperLTL can express information flow policies as well as properties like symmetry in mutual exclusion algorithms or Hamming distances in error-resistant transmission protocols. Previous work on HyperLTL model checking has focussed on the alternation-free fragment of HyperLTL, where verification reduces to checking a standard trace property over an appropriate self-composition of the system. The alternation-free fragment does, however, not cover general hyperliveness properties. Universal formulas, for example, cannot express the secrecy requirement that for every possible value of a secret variable there exists a computation where the value is different while the observations made by the external observer are the same. In this paper, we study the more difficult case of hyperliveness properties expressed as HyperLTL formulas with quantifier alternation. 
We reduce existential quantification to strategic choice and show that synthesis algorithms can be used to eliminate the existential quantifiers automatically. We furthermore show that this approach can be extended to reactive system synthesis, i.e., to automatically construct a reactive system that is guaranteed to satisfy a given HyperLTL formula.) <|cite_end|>
and {monitoring} <|cite_start|> (Reference: Runtime verification of k-safety hyperproperties in hyperltl: This paper introduces a novel runtime verification technique for a rich sub-class of Clarkson and Schneider's hyperproperties. The primary application of such properties is in expressing security policies (e.g., information flow) that cannot be expressed in trace-based specification languages (e.g., LTL). First, to incorporate syntactic means, we draw connections between safety and co-safety hyperproperties and the temporal logic HYPERLTL, which allows explicit quantification over multiple executions. We also define the notion of monitorability in HYPERLTL and identify classes of monitorable HYPERLTL formulas. Then, we introduce an algorithm for monitoring k-safety and co-k-safety hyperproperties expressed in HYPERLTL. Our technique is based on runtime formula progression as well as on-the-fly monitor synthesis across multiple executions. We analyze different performance aspects of our technique by conducting thorough experiments on monitoring security policies for information flow and observational determinism on a real-world location-based service dataset as well as synthetic trace sets.) <|cite_end|> <|cite_start|> (Reference: Monitoring Hyperproperties: Hyperproperties, such as non-interference and observational determinism, relate multiple system executions to each other. They are not expressible in standard temporal logics, like LTL, CTL, and CTL*, and thus cannot be monitored with standard runtime verification techniques. HyperLTL extends linear-time temporal logic (LTL) with explicit quantification over traces in order to express Hyperproperties. We investigate the runtime verification problem of HyperLTL formulas for three different input models: (1) The parallel model, where a fixed number of system executions is processed in parallel. (2) The unbounded sequential model, where system executions are processed sequentially, one execution at a time. In this model, the number of incoming executions may grow forever. (3) The bounded sequential model where the traces are processed sequentially and the number of incoming executions is bounded. We show that deciding monitorability of HyperLTL formulas is PSPACE-complete for input models (1) and (3). Deciding monitorability is PSPACE-complete for alternation-free HyperLTL formulas in input model (2). For every input model, we provide practical monitoring algorithms. We also present various optimization techniques. By recognizing properties of specifications such as reflexivity, symmetry, and transitivity, we reduce the number of comparisons between traces. For the sequential models, we present a technique that minimized the number of traces that need to be stored. Finally, we provide an optimization that succinctly represents the stored traces by sharing common prefixes. We evaluate our optimizations, showing that this leads to much more scalable monitoring, in particular, significantly lower memory consumption.) <|cite_end|> <|cite_start|> (Reference: Rewriting-Based Runtime Verification for Alternation-Free HyperLTL: ) <|cite_end|> <|cite_start|> (Reference: Monitoring Hyperproperties by Combining Static Analysis and Runtime Verification: ) <|cite_end|> <|cite_start|> (Reference: RVHyper: A Runtime Verification Tool for Temporal Hyperproperties: We present RVHyper, a runtime verification tool for hyperproperties. 
Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other. Specifications are given as formulas in the temporal logic HyperLTL, which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. RVHyper processes execution traces sequentially until a violation of the specification is detected. In this case, a counter example, in the form of a set of traces, is returned. As an example application, we show how RVHyper can be used to detect spurious dependencies in hardware designs.) <|cite_end|> <|cite_start|> (Reference: Constraint-Based Monitoring of Hyperproperties: Verifying hyperproperties at runtime is a challenging problem as hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other. It is necessary to store previously seen traces, because every new incoming trace needs to be compatible with every run of the system observed so far. Furthermore, the new incoming trace poses requirements on future traces. In our monitoring approach, we focus on those requirements by rewriting a hyperproperty in the temporal logic HyperLTL to a Boolean constraint system. A hyperproperty is then violated by multiple runs of the system if the constraint system becomes unsatisfiable. We compare our implementation, which utilizes either BDDs or a SAT solver to store and evaluate constraints, to the automata-based monitoring tool RVHyper.) <|cite_end|>
\hltl specifications. \hltl is also supported by a growing set of
tools, including the model checker MCHyper <|cite_start|> (Reference: Algorithms for Model Checking HyperLTL and HyperCTL ^*: ) <|cite_end|> <|cite_start|> (Reference: Verifying Hyperliveness: HyperLTL is an extension of linear-time temporal logic for the specification of hyperproperties, i.e., temporal properties that relate multiple computation traces. HyperLTL can express information flow policies as well as properties like symmetry in mutual exclusion algorithms or Hamming distances in error-resistant transmission protocols. Previous work on HyperLTL model checking has focussed on the alternation-free fragment of HyperLTL, where verification reduces to checking a standard trace property over an appropriate self-composition of the system. The alternation-free fragment does, however, not cover general hyperliveness properties. Universal formulas, for example, cannot express the secrecy requirement that for every possible value of a secret variable there exists a computation where the value is different while the observations made by the external observer are the same. In this paper, we study the more difficult case of hyperliveness properties expressed as HyperLTL formulas with quantifier alternation. We reduce existential quantification to strategic choice and show that synthesis algorithms can be used to eliminate the existential quantifiers automatically. We furthermore show that this approach can be extended to reactive system synthesis, i.e., to automatically construct a reactive system that is guaranteed to satisfy a given HyperLTL formula.) <|cite_end|>, the
satisfiability checkers EAHyper <|cite_start|> (Reference: EAHyper: Satisfiability, Implication, and Equivalence Checking of Hyperproperties: ) <|cite_end|> and MGHyper <|cite_start|> (Reference: MGHyper: Checking Satisfiability of HyperLTL Formulas Beyond the \exists ^*\forall ^* ∃ ∗ ∀ ∗ Fragment: ) <|cite_end|>, and the runtime monitoring tool RVHyper <|cite_start|> (Reference: RVHyper: A Runtime Verification Tool for Temporal Hyperproperties: We present RVHyper, a runtime verification tool for hyperproperties. Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other. Specifications are given as formulas in the temporal logic HyperLTL, which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. RVHyper processes execution traces sequentially until a violation of the specification is detected. In this case, a counter example, in the form of a set of traces, is returned. As an example application, we show how RVHyper can be used to detect spurious dependencies in hardware designs.) <|cite_end|>. Synthesis techniques for
\hltl are studied in <|cite_start|> (Reference: Synthesizing Reactive Systems from Hyperproperties: We study the reactive synthesis problem for hyperproperties given as formulas of the temporal logic HyperLTL. Hyperproperties generalize trace properties, i.e., sets of traces, to sets of sets of traces. Typical examples are information-flow policies like noninterference, which stipulate that no sensitive data must leak into the public domain. Such properties cannot be expressed in standard linear or branching-time temporal logics like LTL, CTL, or CTL$^*$. We show that, while the synthesis problem is undecidable for full HyperLTL, it remains decidable for the $\exists^*$, $\exists^*\forall^1$, and the $\mathit{linear}\;\forall^*$ fragments. Beyond these fragments, the synthesis problem immediately becomes undecidable. For universal HyperLTL, we present a semi-decision procedure that constructs implementations and counterexamples up to a given bound. We report encouraging experimental results obtained with a prototype implementation on example specifications with hyperproperties like symmetric responses, secrecy, and information-flow.) <|cite_end|> and in <|cite_start|> (Reference: Program Repair for Hyperproperties: We study the repair problem for hyperproperties specified in the temporal logic HyperLTL. Hyperproperties are system properties that relate multiple computation traces. This class of properties includes information flow policies like noninterference and observational determinism. The repair problem is to find, for a given Kripke structure, a substructure that satisfies a given specification. We show that the repair problem is decidable for HyperLTL specifications and finite-state Kripke structures. We provide a detailed complexity analysis for different fragments of HyperLTL and different system types: tree-shaped, acyclic, and general Kripke structures.) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> Statistical verification of hyperproperties for cyber-physical systems: Many important properties of cyber-physical systems (CPS) are defined upon the relationship between multiple executions simultaneously in continuous time. Examples include probabilistic fairness and sensitivity to modeling errors (i.e., parameters changes) for real-valued signals. These requirements can only be specified by hyperproperties. In this article, we focus on verifying probabilistic hyperproperties for CPS. To cover a wide range of modeling formalisms, we first propose a general model of probabilistic uncertain systems (PUSs) that unify commonly studied CPS models such as continuous-time Markov chains (CTMCs) and probabilistically parametrized Hybrid I/O Automata (P2HIOA). To formally specify hyperproperties, we propose a new temporal logic, hyper probabilistic signal temporal logic (HyperPSTL) that serves as a hyper and probabilistic version of the conventional signal temporal logic (STL). Considering the complexity of real-world systems that can be captured as PUSs, we adopt a statistical model checking (SMC) approach for their verification. We develop a new SMC technique based on the direct computation of significance levels of statistical assertions for HyperPSTL specifications, which requires no a priori knowledge on the indifference margin. Then, we introduce SMC algorithms for HyperPSTL specifications on the joint probabilistic distribution of multiple paths, as well as specifications with nested probabilistic operators quantifying different paths, which cannot be handled by existing SMC algorithms. Finally, we show the effectiveness of our SMC algorithms on CPS benchmarks with varying levels of complexity, including the Toyota Powertrain Control System. <|reference_end|>",
"<|reference_start|> Monitoring Hyperproperties: Hyperproperties, such as non-interference and observational determinism, relate multiple system executions to each other. They are not expressible in standard temporal logics, like LTL, CTL, and CTL*, and thus cannot be monitored with standard runtime verification techniques. HyperLTL extends linear-time temporal logic (LTL) with explicit quantification over traces in order to express Hyperproperties. We investigate the runtime verification problem of HyperLTL formulas for three different input models: (1) The parallel model, where a fixed number of system executions is processed in parallel. (2) The unbounded sequential model, where system executions are processed sequentially, one execution at a time. In this model, the number of incoming executions may grow forever. (3) The bounded sequential model where the traces are processed sequentially and the number of incoming executions is bounded. We show that deciding monitorability of HyperLTL formulas is PSPACE-complete for input models (1) and (3). Deciding monitorability is PSPACE-complete for alternation-free HyperLTL formulas in input model (2). For every input model, we provide practical monitoring algorithms. We also present various optimization techniques. By recognizing properties of specifications such as reflexivity, symmetry, and transitivity, we reduce the number of comparisons between traces. For the sequential models, we present a technique that minimized the number of traces that need to be stored. Finally, we provide an optimization that succinctly represents the stored traces by sharing common prefixes. We evaluate our optimizations, showing that this leads to much more scalable monitoring, in particular, significantly lower memory consumption. <|reference_end|>",
"<|reference_start|> EAHyper: Satisfiability, Implication, and Equivalence Checking of Hyperproperties: <|reference_end|>",
"<|reference_start|> RVHyper: A Runtime Verification Tool for Temporal Hyperproperties: We present RVHyper, a runtime verification tool for hyperproperties. Hyperproperties, such as non-interference and observational determinism, relate multiple computation traces with each other. Specifications are given as formulas in the temporal logic HyperLTL, which extends linear-time temporal logic (LTL) with trace quantifiers and trace variables. RVHyper processes execution traces sequentially until a violation of the specification is detected. In this case, a counter example, in the form of a set of traces, is returned. As an example application, we show how RVHyper can be used to detect spurious dependencies in hardware designs. <|reference_end|>"
] | [
39,
50,
57,
59
] | {"<|multi_cite_1_1|>": "ss-1036420", "<|multi_cite_1_2|>": "ss-746060", "<|cite_2|>": "ss-698237", "<|cite_3|>": "ss-767290", "<|multi_cite_4_1|>": "ss-1509999", "<|multi_cite_4_2|>": "arxiv-153911", "<|cite_5|>": "ss-854152", "<|cite_6|>": "ss-767969", "<|cite_7|>": "arxiv-116567", "<|cite_8|>": "arxiv-153911", "<|cite_9|>": "arxiv-153911", "<|multi_cite_10_1|>": "arxiv-153911", "<|multi_cite_10_2|>": "ss-1958982", "<|multi_cite_10_3|>": "arxiv-55540", "<|multi_cite_10_4|>": "arxiv-315987", "<|multi_cite_11_1|>": "ss-1519061", "<|multi_cite_11_2|>": "ss-950390", "<|multi_cite_11_3|>": "ss-678134", "<|cite_12|>": "ss-854152", "<|cite_13|>": "arxiv-153911", "<|multi_cite_14_1|>": "ss-950390", "<|multi_cite_14_2|>": "arxiv-60722", "<|multi_cite_14_3|>": "ss-970102", "<|multi_cite_14_4|>": "ss-2151307", "<|multi_cite_14_5|>": "ss-1293660", "<|multi_cite_15_1|>": "ss-1535897", "<|multi_cite_15_2|>": "ss-1519061", "<|cite_16|>": "ss-1293661", "<|cite_17|>": "ss-762632", "<|cite_18|>": "ss-1519061", "<|multi_cite_19_1|>": "ss-1535897", "<|multi_cite_19_2|>": "ss-1519061", "<|cite_20|>": "ss-1293661", "<|multi_cite_22_1|>": "ss-1466333", "<|multi_cite_22_2|>": "arxiv-170838", "<|cite_23|>": "ss-774883", "<|cite_24|>": "ss-1343940", "<|cite_25|>": "arxiv-114950", "<|cite_26|>": "ss-1293661", "<|cite_27|>": "ss-1293661", "<|multi_cite_28_1|>": "ss-1355102", "<|multi_cite_28_2|>": "ss-698237", "<|cite_29|>": "ss-1020605", "<|multi_cite_30_1|>": "ss-746060", "<|multi_cite_30_2|>": "ss-1036420", "<|multi_cite_31_1|>": "ss-849007", "<|multi_cite_31_2|>": "arxiv-133115", "<|multi_cite_31_3|>": "arxiv-207170", "<|multi_cite_31_4|>": "arxiv-265628", "<|multi_cite_32_1|>": "ss-1958982", "<|multi_cite_32_2|>": "arxiv-164497", "<|multi_cite_32_3|>": "ss-914763", "<|multi_cite_32_4|>": "ss-1118681", "<|multi_cite_32_5|>": "arxiv-207628", "<|multi_cite_32_7|>": "arxiv-207172", "<|multi_cite_33_1|>": "ss-849007", "<|multi_cite_33_2|>": "arxiv-265628", "<|cite_34|>": "ss-1732144", "<|cite_35|>": "ss-1398152", "<|cite_36|>": "arxiv-207628", "<|cite_37|>": "arxiv-207169", "<|cite_38|>": "arxiv-316157"} |
1405.3866 | <|paper_start|> Title: Speeding up Convolutional Neural Networks with Low Rank Expansions
Abstract: Speeding up Convolutional Neural Networks with Low Rank Expansions: The focus of this paper is speeding up the evaluation of convolutional neural networks. While delivering impressive results across a range of computer vision and machine learning tasks, these networks are computationally demanding, limiting their deployability. Convolutional layers generally consume the bulk of the processing time, and so in this work we present two simple schemes for drastically speeding up these layers. This is achieved by exploiting cross-channel or filter redundancy to construct a low rank basis of filters that are rank-1 in the spatial domain. Our methods are architecture agnostic, and can be easily applied to existing CPU and GPU convolutional frameworks for tuneable speedup performance. We demonstrate this with a real world network designed for scene text character recognition, showing a possible 2.5x speedup with no loss in accuracy, and 4.5x speedup with less than 1% drop in accuracy, still achieving state-of-the-art on standard benchmarks.
Introduction
\label{sec:intro}
Many applications of machine learning, and most recently computer vision, have been disrupted by the use of convolutional neural networks (CNNs). The combination of an end-to-end learning system with minimal need for human design decisions, and the ability to efficiently train large and complex models, has allowed them to achieve state-of-the-art performance in a number of benchmarks <|cite_start|> (Reference: ImageNet classification with deep convolutional neural
networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|> <|cite_start|> (Reference: DeepFace: Closing the Gap to Human-Level Performance in Face Verification: In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to-date, an identity labeled dataset of four million facial images belonging to more than 4, 000 identities. The learned representations coupling the accurate model-based alignment with the large facial database generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.) <|cite_end|> <|cite_start|> (Reference: DeepPose: Human Pose Estimation via Deep Neural Networks: We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images.) <|cite_end|> <|cite_start|> (Reference: End-to-End Text Recognition with Hybrid HMM Maxout Models: The problem of detecting and recognizing text in natural scenes has proved to be more challenging than its counterpart in documents, with most of the previous work focusing on a single part of the problem. In this work, we propose new solutions to the character and word recognition problems and then show how to combine these solutions in an end-to-end text-recognition system. We do so by leveraging the recently introduced Maxout networks along with hybrid HMM models that have proven useful for voice recognition. 
Using these elements, we build a tunable and highly accurate recognition system that beats state-of-the-art results on all the sub-problems for both the ICDAR 2003 and SVT benchmark datasets.) <|cite_end|> <|cite_start|> (Reference: Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks: Recognizing arbitrary multi-character text in unconstrained natural photographs is a hard problem. In this paper, we address an equally hard sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from Street View imagery. Traditional approaches to solve this problem typically separate out the localization, segmentation, and recognition steps. In this paper we propose a unified approach that integrates these three steps via the use of a deep convolutional neural network that operates directly on the image pixels. We employ the DistBelief implementation of deep neural networks in order to train large, distributed neural networks on high quality images. We find that the performance of this approach increases with the depth of the convolutional network, with the best performance occurring in the deepest architecture we trained, with eleven hidden layers. We evaluate this approach on the publicly available SVHN dataset and achieve over $96\%$ accuracy in recognizing complete street numbers. We show that on a per-digit recognition task, we improve upon the state-of-the-art, achieving $97.84\%$ accuracy. We also evaluate this approach on an even more challenging dataset generated from Street View imagery containing several tens of millions of street number annotations and achieve over $90\%$ accuracy. To further explore the applicability of the proposed system to broader text recognition tasks, we apply it to synthetic distorted text from reCAPTCHA. reCAPTCHA is one of the most secure reverse turing tests that uses distorted text to distinguish humans from bots. We report a $99.8\%$ accuracy on the hardest category of reCAPTCHA. Our evaluations on both tasks indicate that at specific operating thresholds, the performance of the proposed system is comparable to, and in some cases exceeds, that of human operators.) <|cite_end|> <|cite_start|> (Reference: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.) <|cite_end|> <|cite_start|> (Reference: CNN Features off-the-shelf: an Astounding Baseline for Recognition: Recent results indicate that the generic descriptors extracted from the convolutional neural networks are very powerful. This paper adds to the mounting evidence that this is indeed the case. 
We report on a series of experiments conducted for different recognition tasks using the publicly available code and model of the \overfeat network which was trained to perform object classification on ILSVRC13. We use features extracted from the \overfeat network as a generic image representation to tackle the diverse range of recognition tasks of object image classification, scene recognition, fine grained recognition, attribute detection and image retrieval applied to a diverse set of datasets. We selected these tasks and datasets as they gradually move further away from the original task and data the \overfeat network was trained to solve. Astonishingly, we report consistent superior results compared to the highly tuned state-of-the-art systems in all the visual classification tasks on various datasets. For instance retrieval it consistently outperforms low memory footprint methods except for sculptures dataset. The results are achieved using a linear SVM classifier (or $L2$ distance in case of retrieval) applied to a feature representation of size 4096 extracted from a layer in the net. The representations are further modified using simple augmentation techniques e.g. jittering. The results strongly suggest that features obtained from deep learning with convolutional nets should be the primary candidate in most visual recognition tasks.) <|cite_end|>. However, these high performing CNNs come with a large computational cost due to the use of chains of several convolutional layers, often requiring implementations on GPUs <|cite_start|> (Reference: ImageNet classification with deep convolutional neural
networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|> or highly optimized distributed CPU architectures <|cite_start|> (Reference: Improving the Speed of Neural Networks on CPUs: Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.) <|cite_end|> to process large datasets. The increasing use of these networks for detection in sliding window approaches <|cite_start|> (Reference: OverFeat: Integrated Recognition, Localization and Detection using Convolutional Networks: We present an integrated framework for using Convolutional Networks for classification, localization and detection. We show how a multiscale and sliding window approach can be efficiently implemented within a ConvNet. We also introduce a novel deep learning approach to localization by learning to predict object boundaries. Bounding boxes are then accumulated rather than suppressed in order to increase detection confidence. We show that different tasks can be learned simultaneously using a single shared network. This integrated framework is the winner of the localization task of the ImageNet Large Scale Visual Recognition Challenge 2013 (ILSVRC2013) and obtained very competitive results for the detection and classifications tasks. In post-competition work, we establish a new state of the art for the detection task. Finally, we release a feature extractor from our best model called OverFeat.) 
<|cite_end|> <|cite_start|> (Reference: Scene Parsing with Multiscale Feature Learning, Purity Trees, and Optimal Covers: Scene parsing, or semantic segmentation, consists in labeling each pixel in an image with the category of the object it belongs to. It is a challenging task that involves the simultaneous detection, segmentation and recognition of all the objects in the image. The scene parsing method proposed here starts by computing a tree of segments from a graph of pixel dissimilarities. Simultaneously, a set of dense feature vectors is computed which encodes regions of multiple sizes centered on each pixel. The feature extractor is a multiscale convolutional network trained from raw pixels. The feature vectors associated with the segments covered by each node in the tree are aggregated and fed to a classifier which produces an estimate of the distribution of object categories contained in the segment. A subset of tree nodes that cover the image are then selected so as to maximize the average "purity" of the class distributions, hence maximizing the overall likelihood that each segment will contain a single object. The convolutional network feature extractor is trained end-to-end from raw pixels, alleviating the need for engineered features. After training, the system is parameter free. The system yields record accuracies on the Stanford Background Dataset (8 classes), the Sift Flow Dataset (33 classes) and the Barcelona Dataset (170 classes) while being an order of magnitude faster than competing approaches, producing a 320 \times 240 image labeling in less than 1 second.) <|cite_end|> <|cite_start|> (Reference: {Learning and Transferring Mid-Level Image Representations using Convolutional Neural Networks: Convolutional neural networks (CNN) have recently shown outstanding image classification performance in the large- scale visual recognition challenge (ILSVRC2012). The success of CNNs is attributed to their ability to learn rich mid-level image representations as opposed to hand-designed low-level features used in other image classification methods. Learning CNNs, however, amounts to estimating millions of parameters and requires a very large number of annotated image samples. This property currently prevents application of CNNs to problems with limited training data. In this work we show how image representations learned with CNNs on large-scale annotated datasets can be efficiently transferred to other visual recognition tasks with limited amount of training data. We design a method to reuse layers trained on the ImageNet dataset to compute mid-level image representation for images in the PASCAL VOC dataset. We show that despite differences in image statistics and tasks in the two datasets, the transferred representation leads to significantly improved results for object and action classification, outperforming the current state of the art on Pascal VOC 2007 and 2012 datasets. We also show promising results for object and action localization.) <|cite_end|> and the desire to apply CNNs in real-world systems means the speed of inference becomes an important factor for applications. In this paper we introduce an easy-to-implement method for significantly speeding up pre-trained CNNs requiring minimal modifications to existing frameworks. There can be a small associated loss in performance, but this is tunable to a desired accuracy level. For example, we show that a 4.5$\times$ speedup can still give state-of-the-art performance in our example application of character recognition.
While a few other CNN acceleration methods exist, our {\bf key insight is to exploit the redundancy that exists between different feature channels and filters} <|cite_start|> (Reference: Predicting Parameters in Deep Learning: We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.) <|cite_end|>. We contribute two approximation schemes to do so (Sect.~\ref{sec:separable}) and two optimization methods for each scheme (Sect.~\ref{sec:opt}). Both schemes are orthogonal to other architecture-specific optimizations and can be easily applied to existing CPU and GPU software. Performance is evaluated empirically in Sect.~\ref{sec:exp} and results are summarized in Sect.~\ref{sec:conc}.
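As a toy illustration of this kind of redundancy (our own sketch with synthetic numbers, not an experiment from this paper), one can stack a bank of correlated filters as rows of a matrix and inspect its singular values; a fast decay indicates that the bank is well approximated by a much smaller set of basis filters:
\begin{verbatim}
# Toy illustration (synthetic data): cross-filter redundancy shows up as a
# rapidly decaying singular-value spectrum of the stacked filter bank.
import numpy as np

rng = np.random.default_rng(0)
d, n_filters, n_basis = 5, 64, 8

basis = rng.standard_normal((n_basis, d * d))        # few underlying filters
coeffs = rng.standard_normal((n_filters, n_basis))
bank = coeffs @ basis + 0.05 * rng.standard_normal((n_filters, d * d))

s = np.linalg.svd(bank, compute_uv=False)
print((s / s[0]).round(3)[:12])  # roughly n_basis non-negligible values
\end{verbatim}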
\paragraph{Related work.}
There are only a few general speedup methods for CNNs. Denton~\etal <|cite_start|> (Reference: Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation: We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the linear structure present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2x, while keeping the accuracy within 1% of the original model.) <|cite_end|> use low rank approximations and clustering of filters achieving 1.6$\times$ speedup of single convolutional layers (not of the whole network) with a 1\% drop in classification accuracy. Mamalet~\etal <|cite_start|> (Reference: Simplifying ConvNets for Fast Learning: ) <|cite_end|> design the network to use rank-1 filters from the outset and combine them with an average pooling layer; however, the technique cannot be applied to general network designs. Vanhoucke~\etal <|cite_start|> (Reference: Improving the Speed of Neural Networks on CPUs: Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.) <|cite_end|> show that 8-bit quantization of the layer weights can result in a speedup with minimal loss of accuracy. Not specific to CNNs, Rigamonti~\etal <|cite_start|> (Reference: Learning separable filters: Learning filters to produce sparse image representations in terms of over complete dictionaries has emerged as a powerful way to create image features for many different purposes. Unfortunately, these filters are usually both numerous and non-separable, making their use computationally expensive. In this paper, we show that such filters can be computed as linear combinations of a smaller number of separable ones, thus greatly reducing the computational complexity at no cost in terms of performance. 
This makes filter learning approaches practical even for large images or 3D volumes, and we show that we significantly outperform state-of-the-art methods on the linear structure extraction task, in terms of both accuracy and speed. Moreover, our approach is general and can be used on generic filter banks to reduce the complexity of the convolutions.) <|cite_end|> show that multiple image filters can be approximated by a shared set of separable (rank-1) filters, allowing large speedups with minimal loss in accuracy.
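To make the separable-filter idea concrete, the following NumPy/SciPy sketch (our own illustration, not code from any of the cited works) approximates a $9\times 9$ filter by a rank-1 filter via an SVD and applies it as two 1-D convolutions, reducing the per-pixel cost of a $d\times d$ kernel from $O(d^2)$ to $O(d)$ multiply-adds:
\begin{verbatim}
# Rank-1 (separable) approximation of a 2-D filter via SVD (illustrative only).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Example 9x9 kernel that is nearly rank-1: a Gaussian outer product plus noise.
x = np.linspace(-2.0, 2.0, 9)
g = np.exp(-x ** 2)
kernel = np.outer(g, g) + 0.01 * rng.standard_normal((9, 9))

U, s, Vt = np.linalg.svd(kernel)
col = np.sqrt(s[0]) * U[:, 0]        # vertical 1-D filter
row = np.sqrt(s[0]) * Vt[0]          # horizontal 1-D filter

image = rng.standard_normal((64, 64))
full = convolve2d(image, kernel, mode="valid")
separable = convolve2d(convolve2d(image, row[np.newaxis, :], mode="valid"),
                       col[:, np.newaxis], mode="valid")

err = np.linalg.norm(full - separable) / np.linalg.norm(full)
print(f"relative error of the rank-1 approximation: {err:.3f}")
\end{verbatim}
For kernels that are close to rank-1, the reported relative error is small while the filtering cost drops substantially.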
Moving to hardware-specific optimizations, \texttt{cuda-convnet} <|cite_start|> (Reference: ImageNet classification with deep convolutional neural
networks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called "dropout" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry.) <|cite_end|> and \texttt{Caffe} show that highly optimized CPU and GPU code can give superior computational performance in CNNs. <|cite_start|> (Reference: Fast Training of Convolutional Networks through FFTs: Convolutional networks are one of the most widely employed architectures in computer vision and machine learning. In order to leverage their ability to learn complex functions, large amounts of data are required for training. Training a large convolutional network to produce state-of-the-art results can take weeks, even when using modern GPUs. Producing labels using a trained network can also be costly when dealing with web-scale datasets. In this work, we present a simple algorithm which accelerates training and inference by a significant factor, and can yield improvements of over an order of magnitude compared to existing state-of-the-art implementations. This is done by computing convolutions as pointwise products in the Fourier domain while reusing the same transformed feature map many times. The algorithm is implemented on a GPU architecture and addresses a number of related challenges.) <|cite_end|> performs convolutions in the Fourier domain through FFTs computed efficiently over batches of images on a GPU. Other methods from <|cite_start|> (Reference: Improving the Speed of Neural Networks on CPUs: Recent advances in deep learning have made the use of large, deep neural networks with tens of millions of parameters suitable for a number of applications that require real-time processing. The sheer size of these networks can represent a challenging computational burden, even for modern CPUs. For this reason, GPUs are routinely used instead to train and run such networks. This paper is a tutorial for students and researchers on some of the techniques that can be used to reduce this computational cost considerably on modern x86 CPUs. We emphasize data layout, batching of the computation, the use of SSE2 instructions, and particularly leverage SSSE3 and SSE4 fixed-point instructions which provide a 3× improvement over an optimized floating-point baseline. We use speech recognition as an example task, and show that a real-time hybrid hidden Markov model / neural network (HMM/NN) large vocabulary system can be built with a 10× speedup over an unoptimized baseline and a 4× speedup over an aggressively optimized floating-point baseline at no cost in accuracy. 
The techniques described extend readily to neural network training and provide an effective alternative to the use of specialized hardware.) <|cite_end|> show that specific CPU architectures can be taken advantage of, \eg by using SSSE3 and SSE4 fixed-point instructions and appropriate alignment of data in memory. Farabet~\etal <|cite_start|> (Reference: Large-Scale FPGA-based Convolutional Networks: Micro-robots, unmanned aerial vehicles, imaging sensor networks, wireless phones, and other embedded vision systems all require low cost and high-speed implementations of synthetic vision systems capable of recognizing and categorizing objects in a scene. Many successful object recognition systems use dense features extracted on regularly spaced patches over the input image. The majority of the feature extraction systems have a common structure composed of a filter bank (generally based on oriented edge detectors or 2D Gabor functions), a nonlinear operation (quantization, winner-take-all, sparsification, normalization, and/or pointwise saturation), and finally a pooling operation (max, average, or histogramming). For example, the scale-invariant feature transform (SIFT) (Lowe, 2004) operator applies oriented edge filters to a small patch and determines the dominant orientation through a winner-take-all operation. Finally, the resulting sparse vectors are added (pooled) over a larger patch to form a local orientation histogram. Some recognition systems use a single stage of feature extractors (Lazebnik, Schmid, and Ponce, 2006; Dalal and Triggs, 2005; Berg, Berg, and Malik, 2005; Pinto, Cox, and DiCarlo, 2008). Other models such as HMAX-type models (Serre, Wolf, and Poggio, 2005; Mutch, and Lowe, 2006) and convolutional networks use two more layers of successive feature extractors. Different training algorithms have been used for learning the parameters of convolutional networks. In LeCun et al. (1998b) and Huang and LeCun (2006), pure supervised learning is used to update the parameters. However, recent works have focused on training with an auxiliary task (Ahmed et al., 2008) or using unsupervised objectives (Ranzato et al., 2007b; Kavukcuoglu et al., 2009; Jarrett et al., 2009; Lee et al., 2009).) <|cite_end|> show that using bespoke FPGA implementations of CNNs can greatly increase processing speed.
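As a rough sketch of the Fourier-domain idea (ours, and much simpler than the batched GPU implementation of the cited work), a linear convolution can be computed as a pointwise product of zero-padded transforms, and the transform of each input feature map can be reused across all filters applied to it:
\begin{verbatim}
# Valid 2-D convolution via FFT (illustrative sketch, single image and filter).
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(1)
image = rng.standard_normal((32, 32))
kern = rng.standard_normal((5, 5))

H, W = image.shape
kH, kW = kern.shape
shape = (H + kH - 1, W + kW - 1)            # zero-pad to the full output size

F_img = np.fft.rfft2(image, s=shape)        # reusable across many filters
F_ker = np.fft.rfft2(kern, s=shape)
full = np.fft.irfft2(F_img * F_ker, s=shape)

valid = full[kH - 1:H, kW - 1:W]            # crop the "valid" region
print(np.allclose(valid, convolve2d(image, kern, mode="valid")))  # True
\end{verbatim}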
To speed up test-time in a sliding window context for a CNN, <|cite_start|> (Reference: DenseNet: Implementing Efficient ConvNet Descriptor Pyramids: Convolutional Neural Networks (CNNs) can provide accurate object classification. They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors.) <|cite_end|> shows that multi-scale features can be computed efficiently by simply convolving the CNN across a flattened multi-scale pyramid. Furthermore search space reduction techniques such as selective search <|cite_start|> (Reference: Segmentation as selective search for object recognition: For object recognition, the current state-of-the-art is based on exhaustive search. However, to enable the use of more expensive features and classifiers and thereby progress beyond the state-of-the-art, a selective search strategy is needed. Therefore, we adapt segmentation as a selective search by reconsidering segmentation: We propose to generate many approximate locations over few and precise object delineations because (1) an object whose location is never generated can not be recognised and (2) appearance and immediate nearby context are most effective for object recognition. Our method is class-independent and is shown to cover 96.7% of all objects in the Pascal VOC 2007 test set using only 1,536 locations per image. Our selective search enables the use of the more expensive bag-of-words method which we use to substantially improve the state-of-the-art by up to 8.5% for 8 out of 20 classes on the Pascal VOC 2010 detection challenge.) <|cite_end|> drastically cut down the number of times a full forward pass of the CNN must be computed by cheaply identifying a small number of candidate object locations in the image.
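A back-of-the-envelope count (with hypothetical layer sizes of our own choosing) shows why sharing computation between overlapping windows matters: evaluating one small convolutional layer independently on every window of an image costs orders of magnitude more multiply-accumulates than convolving the whole image once and reading off the per-window responses:
\begin{verbatim}
# Hypothetical numbers: one 5x5 conv layer with 64 filters, 32x32 windows at
# stride 1 over a 256x256 image, per-window versus shared (dense) evaluation.
H = W = 256
win, k, c_out = 32, 5, 64

per_window = (win - k + 1) ** 2 * k * k * c_out
n_windows = (H - win + 1) ** 2
naive = per_window * n_windows               # recompute for every window
shared = (H - k + 1) ** 2 * k * k * c_out    # convolve the image once

print(f"naive : {naive:.2e} MACs, shared: {shared:.2e} MACs, "
      f"ratio: {naive / shared:.0f}x")
\end{verbatim}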
Note that the methods we propose are not specific to any processing architecture and can be combined with many of the other speedup methods given above.
\nopagebreak <|paper_end|> | [
"<|reference_start|> ImageNet classification with deep convolutional neural\nnetworks: We trained a large, deep convolutional neural network to classify the 1.2 million high-resolution images in the ImageNet LSVRC-2010 contest into the 1000 different classes. On the test data, we achieved top-1 and top-5 error rates of 37.5% and 17.0%, respectively, which is considerably better than the previous state-of-the-art. The neural network, which has 60 million parameters and 650,000 neurons, consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully connected layers with a final 1000-way softmax. To make training faster, we used non-saturating neurons and a very efficient GPU implementation of the convolution operation. To reduce overfitting in the fully connected layers we employed a recently developed regularization method called \"dropout\" that proved to be very effective. We also entered a variant of this model in the ILSVRC-2012 competition and achieved a winning top-5 test error rate of 15.3%, compared to 26.2% achieved by the second-best entry. <|reference_end|>",
"<|reference_start|> DeepPose: Human Pose Estimation via Deep Neural Networks: We propose a method for human pose estimation based on Deep Neural Networks (DNNs). The pose estimation is formulated as a DNN-based regression problem towards body joints. We present a cascade of such DNN regressors which results in high precision pose estimates. The approach has the advantage of reasoning about pose in a holistic fashion and has a simple but yet powerful formulation which capitalizes on recent advances in Deep Learning. We present a detailed empirical analysis with state-of-art or better performance on four academic benchmarks of diverse real-world images. <|reference_end|>",
"<|reference_start|> Exploiting Linear Structure Within Convolutional Networks for Efficient Evaluation: We present techniques for speeding up the test-time evaluation of large convolutional networks, designed for object recognition tasks. These models deliver impressive accuracy but each image evaluation requires millions of floating point operations, making their deployment on smartphones and Internet-scale clusters problematic. The computation is dominated by the convolution operations in the lower layers of the model. We exploit the linear structure present within the convolutional filters to derive approximations that significantly reduce the required computation. Using large state-of-the-art models, we demonstrate we demonstrate speedups of convolutional layers on both CPU and GPU by a factor of 2x, while keeping the accuracy within 1% of the original model. <|reference_end|>",
"<|reference_start|> DenseNet: Implementing Efficient ConvNet Descriptor Pyramids: Convolutional Neural Networks (CNNs) can provide accurate object classification. They can be extended to perform object detection by iterating over dense or selected proposed object regions. However, the runtime of such detectors scales as the total number and/or area of regions to examine per image, and training such detectors may be prohibitively slow. However, for some CNN classifier topologies, it is possible to share significant work among overlapping regions to be classified. This paper presents DenseNet, an open source system that computes dense, multiscale features from the convolutional layers of a CNN based object classifier. Future work will involve training efficient object detectors with DenseNet feature descriptors. <|reference_end|>"
] | [
0,
2,
13,
21
] | {"<|multi_cite_1_1|>": "ss-690198", "<|multi_cite_1_2|>": "ss-996299", "<|multi_cite_1_3|>": "arxiv-54130", "<|multi_cite_1_4|>": "arxiv-51204", "<|multi_cite_1_5|>": "arxiv-54338", "<|multi_cite_1_6|>": "arxiv-54395", "<|multi_cite_1_7|>": "arxiv-58497", "<|multi_cite_2_1|>": "ss-690198", "<|cite_3|>": "ss-1713975", "<|multi_cite_4_1|>": "arxiv-54395", "<|multi_cite_4_2|>": "arxiv-28595", "<|multi_cite_4_3|>": "ss-1113121", "<|cite_5|>": "arxiv-46476", "<|cite_6|>": "arxiv-58905", "<|cite_7|>": "ss-1221827", "<|cite_8|>": "ss-1713975", "<|cite_9|>": "ss-1287530", "<|cite_10|>": "ss-690198", "<|cite_12|>": "arxiv-54301", "<|cite_13|>": "ss-1713975", "<|cite_14|>": "ss-773905", "<|cite_15|>": "arxiv-59118", "<|cite_16|>": "ss-1836884"} |
1408.2303 | <|paper_start|> Title: Gabidulin Decoding via Minimal Bases of Linearized Polynomial Modules
Abstract: Gabidulin Decoding via Minimal Bases of Linearized Polynomial Modules: We show how Gabidulin codes can be decoded via parametrization by using interpolation modules over the ring of linearized polynomials with composition. Our decoding algorithm computes a list of message words that correspond to all closest codewords to a given received word. This involves the computation of a minimal basis for the interpolation module that corresponds to the received word, followed by a search through the parametrization for valid message words. Our module-theoretic approach strengthens the link between Gabidulin decoding and Reed-Solomon decoding. Two subalgorithms are presented to compute the minimal basis, one iterative, the other an extended Euclidean algorithm. Both of these subalgorithms have polynomial time complexity. The complexity order of the overall algorithm, using the parametrization, is then compared to straightforward exhaustive search as well as to chase list decoding.
Introduction
Over the last decade there has been increased interest in Gabidulin codes, mainly because of their relevance to network coding <|cite_start|> (Reference: Coding for Errors and Erasures in Random Network Coding: The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.) <|cite_end|> <|cite_start|> (Reference: A Rank-Metric Approach to Error Control in Random Network Coding: The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of K\"otter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if $\mu$ erasures and $\delta$ deviations occur, then errors of rank $t$ can always be corrected provided that $2t \leq d - 1 + \mu + \delta$, where $d$ is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application where $n$ packets of length $M$ over $F_q$ are transmitted, the complexity of the decoding algorithm is given by $O(dM)$ operations in an extension field $F_{q^n}$.) <|cite_end|>. Gabidulin codes are optimal rank-metric codes over a field $\F_{q^m}$ (where $q$ is a prime power). They were first derived by Gabidulin in and independently by Delsarte in <|cite_start|> (Reference: Bilinear Forms over a Finite Field, with Applications to Coding Theory: ) <|cite_end|>.
These codes can be seen as the $q$-analog of Reed-Solomon codes, using $q$-linearized polynomials instead of arbitrary polynomials. They are optimal in the sense that they are not only MDS codes with respect to the Hamming metric, but also achieve the Singleton bound with respect to the rank metric and are thus MRD codes. They are not only of interest in network coding but also in space-time coding <|cite_start|> (Reference: Maximum Rank Distance Codes as Space-Time Codes: The critical design criterion for space-time codes in asymptotically good channels is the minimum rank between codeword pairs. Rank codes are a two-dimensional matrix code construction where by the rank is the metric of merit. We look at the application of rank codes to space-time code design. In particular, we provide construction methods of full-rank codes over different complex signal constellations, for arbitrary numbers of antennas, and codeword periods. We also derive a Singleton-type bound on the rate of a code for the rank metric, and we show that rank codes satisfy this bound with equality.) <|cite_end|>, crisscross error correction <|cite_start|> (Reference: Author's Reply to Comments on 'Maximum-rank array codes and their application to crisscross error correction': A mu -(n*n,k) array code C over a field F is a k-dimensional linear space of n*n matrices over F such that every nonzero matrix in C has rank >or= mu . It is first shown that the dimension of such array codes must satisfy the Singleton-like bound k >) <|cite_end|> and distributed storage <|cite_start|> (Reference: Error Resilience in Distributed Storage via Rank-Metric Codes: This paper presents a novel coding scheme for distributed storage systems containing nodes with adversarial errors. The key challenge in such systems is the propagation of erroneous data from a single corrupted node to the rest of the system during a node repair process. This paper presents a concatenated coding scheme which is based on two types of codes: maximum rank distance (MRD) code as an outer code and optimal repair maximal distance separable (MDS) array code as an inner code. Given this, two different types of adversarial errors are considered: the first type considers an adversary that can replace the content of an affected node only once; while the second attack-type considers an adversary that can pollute data an unbounded number of times. This paper proves that the proposed coding scheme attains a suitable upper bound on resilience capacity for the first type of error. Further, the paper presents mechanisms that combine this code with subspace signatures to achieve error resilience for the second type of errors. Finally, the paper concludes by presenting a construction based on MRD codes for optimal locally repairable scalar codes that can tolerate adversarial errors.) <|cite_end|>.
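To fix ideas about the rank metric (a toy example of our own, with $q=2$): the rank weight of a word over $\F_{2^m}$ is the $\F_2$-rank of the $m\times n$ matrix obtained by expanding each symbol into its coordinate vector over the base field, so a word can have large Hamming weight but small rank weight. The helper functions below are purely illustrative:
\begin{verbatim}
# Rank weight over F_{2^m}, q = 2 (illustrative helper functions).
def gf2_rank(matrix):
    """Rank over GF(2); matrix is a list of rows with 0/1 entries."""
    rows = [int("".join(map(str, r)), 2) for r in matrix]   # rows as bitmasks
    rank = 0
    for _ in range(len(rows)):
        nonzero = [r for r in rows if r]
        if not nonzero:
            break
        pivot = max(nonzero)
        rows.remove(pivot)
        msb = 1 << (pivot.bit_length() - 1)
        rows = [r ^ pivot if r & msb else r for r in rows]
        rank += 1
    return rank

def rank_weight(word, m):
    """word: symbols of F_{2^m} encoded as integers 0 .. 2^m - 1."""
    cols = [[(sym >> i) & 1 for i in range(m)] for sym in word]
    matrix = [[col[i] for col in cols] for i in range(m)]    # m x n over F_2
    return gf2_rank(matrix)

# (0, 5, 5, 5) over F_{2^3}: Hamming weight 3, but all nonzero columns of the
# expansion coincide, so the rank weight is only 1.
print(rank_weight([0, 5, 5, 5], m=3))   # -> 1
print(rank_weight([1, 2, 4, 7], m=3))   # -> 3
\end{verbatim}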
The decoding of Gabidulin codes has obtained a fair amount of attention in the literature, starting with work on decoding within the unique decoding radius in <|cite_start|> (Reference: A Fast Matrix Decoding Algorithm for Rank-Error-Correcting Codes: ) <|cite_end|> and more recently <|cite_start|> (Reference: A Welch-Berlekamp Like Algorithm for Decoding Gabidulin Codes: ) <|cite_end|> <|cite_start|> (Reference: Fast decoding of rank-codes with rank errors and column erasures: This paper describes the decoding of Rank-Codes with different decoding algorithms. A new modified Berlekamp-Massey algorithm for correcting rank errors and column erasures is described. These algorithms consist of two decoding steps. The first step is the puncturing of the code and the decoding in the punctured code. The second step is the column erasure decoding in the original code. Thus decoding step is about half as complex as the known algorithms) <|cite_end|> <|cite_start|> (Reference: Decoding Interleaved Gabidulin Codes and Multisequence Linearized Shift-Register Synthesis: An interleaved Gabidulin code is the direct sum of ℓ Gabidulin codes. We propose an efficient decoding algorithm that corrects with high probability errors of rank up to ℓ over ℓ+1(d−1), where d is the rank distance of the interleaved code. The probability of decoding failure is estimated. The proposed decoding is based on a multisequence linearized shift-register synthesis algorithm, given in the paper. The time complexity of the decoding algorithm is O(ℓd2).) <|cite_end|> <|cite_start|> (Reference: Skew-Feedback Shift-Register Synthesis and Decoding Interleaved Gabidulin Codes: An efficient algorithm which synthesizes all shortest skew-feedback shift-registers (defined in the paper) generating L sequences of varying length over a field is derived and its correctness is proved. It generalizes the Berlekamp-Massey algorithm and some other algorithms, and has time complexity O(LN2), where N is the length of a longest sequence. The proposed algorithm can be applied for efficiently solving the key equation when decoding interleaved (or direct sum of) Gabidulin codes beyond half minimum distance. Those codes have many applications and, as shown by Kötter and Kschischang, can be used for random network coding.) <|cite_end|> <|cite_start|> (Reference: Fast Encoding and Decoding of Gabidulin Codes: Gabidulin codes are the rank-metric analogs of Reed-Solomon codes and have a major role in practical error control for network coding. This paper presents new encoding and decoding algorithms for Gabidulin codes based on low-complexity normal bases. In addition, a new decoding algorithm is proposed based on a transform-domain approach. Together, these represent the fastest known algorithms for encoding and decoding Gabidulin codes.) <|cite_end|> <|cite_start|> (Reference: Fast decoding of Gabidulin codes: ) <|cite_end|>. If $n$ is the length of the Gabidulin code and $k$ denotes the dimension of the code as a linear space over the field $\F_{q^m}$, the unique decoding radius is given by $(n-k)/2$.
Decoding beyond the unique decoding radius was addressed in e.g.\ <|cite_start|> (Reference: Decoding rank errors beyond the error correcting capability: HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Decoding rank errors beyond the error-correcting capability Pierre Loidreau) <|cite_end|> <|cite_start|> (Reference: List-decoding of Subspace Codes and Rank-Metric Codes up to Singleton Bound: Subspace codes and rank-metric codes can be used to correct errors and erasures in network, with linear network coding. Subspace codes were introduced by Koetter and Kschischang to correct errors and erasures in networks where topology is unknown (the noncoherent case). In a previous work, we have developed a family of subspace codes, based upon the Koetter-Kschichang construction, which are efficiently list decodable. Using these codes, we achieved a better decoding radius than Koetter-Kschischiang codes at low rates. Herein, we introduce a new family of subspace codes based upon a different approach which leads to a linear-algebraic list-decoding algorithm. The resulting error correction radius can be expressed as follows: for any integer $s$, our list-decoder using $s+1$-interpolation polynomials guarantees successful recovery of the message subspace provided the normalized dimension of errors is at most $s(1-sR)$. The same list-decoding algorithm can be used to correct erasures as well as errors. The size of output list is at most $Q^{s-1}$, where $Q$ is the size of the field that message symbols are chosen from. Rank-metric codes are suitable for error correction in the case where the network topology and the underlying network code are known (the coherent case). Gabidulin codes are a well-known class of algebraic rank-metric codes that meet the Singleton bound on the minimum rank metric of a code. In this paper, we introduce a folded version of Gabidulin codes analogous to the folded Reed-Solomon codes of Guruswami and Rudra along with a list-decoding algorithm for such codes. Our list-decoding algorithm makes it possible to recover the message provided that the normalized rank of error is at most $1-R-\epsilon$, for any $\epsilon > 0$. Notably this achieves the information theoretic bound on the decoding radius of a rank-metric code.) <|cite_end|> <|cite_start|> (Reference: Synthesizing all linearized shift-registers of the minimal or required length: An efficient algorithm synthesizing all q-linearized shift-registers of the minimal or required length generating a sequence of length N over a finite field IFqm is considered. This algorithm, which is a generalization of the Berlekamp-Massey algorithm, has time complexity ̃(N 2 ) operations in IFqm , and can be applied for efficient solving the key equation when decoding Gabidulin codes up to and beyond half the minimum rank distance .) <|cite_end|> <|cite_start|> (Reference: Bounds on List Decoding of Rank-Metric Codes: So far, there is no polynomial-time list decoding algorithm (beyond half the minimum distance) for Gabidulin codes. 
These codes can be seen as the rank-metric equivalent of Reed--Solomon codes. In this paper, we provide bounds on the list size of rank-metric codes in order to understand whether polynomial-time list decoding is possible or whether it works only with exponential time complexity. Three bounds on the list size are proven. The first one is a lower exponential bound for Gabidulin codes and shows that for these codes no polynomial-time list decoding beyond the Johnson radius exists. Second, an exponential upper bound is derived, which holds for any rank-metric code of length $n$ and minimum rank distance $d$. The third bound proves that there exists a rank-metric code over $\Fqm$ of length $n \leq m$ such that the list size is exponential in the length for any radius greater than half the minimum rank distance. This implies that there cannot exist a polynomial upper bound depending only on $n$ and $d$ similar to the Johnson bound in Hamming metric. All three rank-metric bounds reveal significant differences to bounds for codes in Hamming metric.) <|cite_end|> <|cite_start|> (Reference: Decoding of block and convolutional codes in rank metric: Rank-metric codes recently attract a lot of attention due to their possible application to network coding, cryptography, space-time coding and distributed storage. An optimal-cardinality algebraic code construction in rank metric was introduced some decades ago by Delsarte, Gabidulin and Roth. This Reed–Solomon-like code class is based on the evaluation of linearized polynomials and is nowadays called Gabidulin codes. This dissertation considers block and convolutional codes in rank metric with the objective of designing and investigating efficient decoding algorithms for both code classes. After giving a brief introduction to codes in rank metric and their properties, we first derive sub-quadratic-time algorithms for operations with linearized polynomials and state a new bounded minimum distance decoding algorithm for Gabidulin codes. This algorithm directly outputs the linearized evaluation polynomial of the estimated codeword by means of the (fast) linearized Euclidean algorithm. Second, we present a new interpolation-based algorithm for unique and (not necessarily polynomial-time) list decoding of interleaved Gabidulin codes. This algorithm decodes most error patterns of rank greater than half the minimum rank distance by efficiently solving two linear systems of equations. As a third topic, we investigate the possibilities of polynomial-time list decoding of rank-metric codes in general and Gabidulin codes in particular. For this purpose, we derive three bounds on the list size. These bounds show that the behavior of the list size for both, Gabidulin and rank-metric block codes in general, is significantly different from the behavior of Reed–Solomon codes and block codes in Hamming metric, respectively. The bounds imply, amongst others, that there exists no polynomial upper bound on the list size in rank metric as the Johnson bound in Hamming metric, which depends only on the length and the minimum rank distance of the code. Finally, we introduce a special class of convolutional codes in rank metric and propose an efficient decoding algorithm for these codes. These convolutional codes are (partial) unit memory codes, built upon rank-metric block codes. This structure is crucial in the decoding process since we exploit the efficient decoders of the underlying block codes in order to decode the convolutional code.) <|cite_end|>. 
In this case, one speaks of \emph{list decoding}, i.e.\ finding all codewords within a given radius of the received word. A main open question is whether Gabidulin codes can be list decoded efficiently. This paper seeks to contribute to current research efforts on this open question.
In <|cite_start|> (Reference: Bounds on List Decoding of Rank-Metric Codes: So far, there is no polynomial-time list decoding algorithm (beyond half the minimum distance) for Gabidulin codes. These codes can be seen as the rank-metric equivalent of Reed--Solomon codes. In this paper, we provide bounds on the list size of rank-metric codes in order to understand whether polynomial-time list decoding is possible or whether it works only with exponential time complexity. Three bounds on the list size are proven. The first one is a lower exponential bound for Gabidulin codes and shows that for these codes no polynomial-time list decoding beyond the Johnson radius exists. Second, an exponential upper bound is derived, which holds for any rank-metric code of length $n$ and minimum rank distance $d$. The third bound proves that there exists a rank-metric code over $\Fqm$ of length $n \leq m$ such that the list size is exponential in the length for any radius greater than half the minimum rank distance. This implies that there cannot exist a polynomial upper bound depending only on $n$ and $d$ similar to the Johnson bound in Hamming metric. All three rank-metric bounds reveal significant differences to bounds for codes in Hamming metric.) <|cite_end|> it was shown that, beyond the Johnson radius $n-\sqrt{kn}$, list decoding with a polynomial size list of codewords is not possible.
This raises the question up to which radius list decoding with a polynomial list size \emph{is} possible.
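As a concrete illustration of the range in question (example parameters of our own choosing), consider a Gabidulin code with $n=16$ and $k=4$:
\[
\left\lfloor \frac{n-k}{2} \right\rfloor = 6, \qquad n-\sqrt{kn} = 16-\sqrt{64} = 8 ,
\]
so decoding radii between $6$ and $8$ are covered by neither the unique decoding guarantee nor the negative result beyond the Johnson radius.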
Recent results <|cite_start|> (Reference: Explicit rank-metric codes list-decodable with optimal redundancy: We construct an explicit family of linear rank-metric codes over any field ${\mathbb F}_h$ that enables efficient list decoding up to a fraction $\rho$ of errors in the rank metric with a rate of $1-\rho-\epsilon$, for any desired $\rho \in (0,1)$ and $\epsilon > 0$. Previously, a Monte Carlo construction of such codes was known, but this is in fact the first explicit construction of positive rate rank-metric codes for list decoding beyond the unique decoding radius. Our codes are subcodes of the well-known Gabidulin codes, which encode linearized polynomials of low degree via their values at a collection of linearly independent points. The subcode is picked by restricting the message polynomials to an ${\mathbb F}_h$-subspace that evades the structured subspaces over an extension field ${\mathbb F}_{h^t}$ that arise in the linear-algebraic list decoder for Gabidulin codes due to Guruswami and Xing (STOC'13). This subspace is obtained by combining subspace designs contructed by Guruswami and Kopparty (FOCS'13) with subspace evasive varieties due to Dvir and Lovett (STOC'12). We establish a similar result for subspace codes, which are a collection of subspaces, every pair of which have low-dimensional intersection, and which have received much attention recently in the context of network coding. We also give explicit subcodes of folded Reed-Solomon (RS) codes with small folding order that are list-decodable (in the Hamming metric) with optimal redundancy, motivated by the fact that list decoding RS codes reduces to list decoding such folded RS codes. However, as we only list decode a subcode of these codes, the Johnson radius continues to be the best known error fraction for list decoding RS codes.) <|cite_end|> <|cite_start|> (Reference: List Decoding Reed-Solomon, Algebraic-Geometric, and Gabidulin Subcodes up to the Singleton Bound: We consider Reed-Solomon (RS) codes whose evaluation points belong to a subfield, and give a linear-algebraic list decoding algorithm that can correct a fraction of errors approaching the code distance, while pinning down the candidate messages to a well-structured affine space of dimension a constant factor smaller than the code dimension. By pre-coding the message polynomials into a subspace-evasive set, we get a Monte Carlo construction of a subcode of Reed-Solomon codes that can be list decoded from a fraction (1-R-ε) of errors in polynomial time (for any fixed ε > 0) with a list size of O(1/ε). Our methods extend to algebraic-geometric (AG) codes, leading to a similar claim over constant-sized alphabets. This matches parameters of recent results based on folded variants of RS and AG codes. but our construction here gives subcodes of Reed-Solomon and AG codes themselves (albeit with restrictions on the evaluation points).
Further, the underlying algebraic idea also extends nicely to Gabidulin's construction of rank-metric codes based on linearized polynomials. This gives the first construction of positive rate rank-metric codes list decodable beyond half the distance, and in fact gives codes of rate R list decodable up to the optimal (1-R-ε) fraction of rank errors. A similar claim holds for the closely related subspace codes studied by Koetter and Kschischang.
We introduce a new notion called subspace designs as another way to pre-code messages and prune the subspace of candidate solutions. Using these, we also get a deterministic construction of a polynomial time list decodable subcode of RS codes. By using a cascade of several subspace designs, we extend our approach to AG codes, which gives the first deterministic construction of an algebraic code family of rate R with efficient list decoding from 1-R-ε fraction of errors over an alphabet of constant size (that depends only on ε). The list size bound is almost a constant (governed by log* (block length)), and the code can be constructed in quasi-polynomial time.) <|cite_end|> show an explicit construction of rank-metric codes, constructed as subcodes of Gabidulin codes, that can be list-decoded in polynomial time up to a certain radius beyond the unique decoding radius. This motivates further research into what happens in the original Gabidulin setting between the unique decoding radius and the Johnson radius.
A closely related family of codes is the one of lifted Gabidulin codes <|cite_start|> (Reference: A Rank-Metric Approach to Error Control in Random Network Coding: The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of K\"otter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if $\mu$ erasures and $\delta$ deviations occur, then errors of rank $t$ can always be corrected provided that $2t \leq d - 1 + \mu + \delta$, where $d$ is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application where $n$ packets of length $M$ over $F_q$ are transmitted, the complexity of the decoding algorithm is given by $O(dM)$ operations in an extension field $F_{q^n}$.) <|cite_end|>. These codes are sets of vector spaces
and can be used for non-coherent (also called random) network coding <|cite_start|> (Reference: Coding for Errors and Erasures in Random Network Coding: The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.) <|cite_end|>. Unique decoding of lifted Gabidulin codes was investigated in e.g.\ <|cite_start|> (Reference: Coding for Errors and Erasures in Random Network Coding: The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.) <|cite_end|> <|cite_start|> (Reference: A Rank-Metric Approach to Error Control in Random Network Coding: The problem of error control in random linear network coding is addressed from a matrix perspective that is closely related to the subspace perspective of K\"otter and Kschischang. A large class of constant-dimension subspace codes is investigated. It is shown that codes in this class can be easily constructed from rank-metric codes, while preserving their distance properties. 
Moreover, it is shown that minimum distance decoding of such subspace codes can be reformulated as a generalized decoding problem for rank-metric codes where partial information about the error is available. This partial information may be in the form of erasures (knowledge of an error location but not its value) and deviations (knowledge of an error value but not its location). Taking erasures and deviations into account (when they occur) strictly increases the error correction capability of a code: if $\mu$ erasures and $\delta$ deviations occur, then errors of rank $t$ can always be corrected provided that $2t \leq d - 1 + \mu + \delta$, where $d$ is the minimum rank distance of the code. For Gabidulin codes, an important family of maximum rank distance codes, an efficient decoding algorithm is proposed that can properly exploit erasures and deviations. In a network coding application where $n$ packets of length $M$ over $F_q$ are transmitted, the complexity of the decoding algorithm is given by $O(dM)$ operations in an extension field $F_{q^n}$.) <|cite_end|>, whereas list-decoding of these codes was studied in <|cite_start|> (Reference: List Decoding of Lifted Gabidulin Codes via the Pl\"ucker Embedding: Codes in the Grassmannian have recently found an application in random network coding. All the codewords in such codes are subspaces of $\F_q^n$ with a given dimension. In this paper, we consider the problem of list decoding of a certain family of codes in the Grassmannian, called lifted Gabidulin codes. For this purpose we use the Pl\"ucker embedding of the Grassmannian. We describe a way of representing a subset of the Pl\"ucker coordinates of lifted Gabidulin codes as linear block codes. The union of the parity-check equations of these block codes and the equations which arise from the description of a ball around a subspace in the Pl\"ucker coordinates describe the list of codewords with distance less than a given parameter from the received word.) <|cite_end|> <|cite_start|> (Reference: General Linearized Polynomial Interpolation and Its Applications: In this paper, we first propose a general interpolation algorithm in a free module of a linearized polynomial ring, and then apply this algorithm to decode several important families of codes, Gabidulin codes, KK codes and MV codes. Our decoding algorithm for Gabidulin codes is different from the polynomial reconstruction algorithm by Loidreau. When applied to decode KK codes, our interpolation algorithm is equivalent to the Sudan-style list-1 decoding algorithm proposed by K/"otter and Kschischang for KK codes. The general interpolation approach is also capable of solving the interpolation problem for the list decoding of MV codes proposed by Mahdavifar and Vardy, and has a lower complexity than solving linear equations.) <|cite_end|>.
Using the close resemblance between Reed-Solomon codes and Gabidulin codes, the paper <|cite_start|> (Reference: A Welch-Berlekamp Like Algorithm for Decoding Gabidulin Codes: ) <|cite_end|> translates Gabidulin decoding into a set of polynomial interpolation conditions. Essentially, this setup is also used in the papers <|cite_start|> (Reference: Coding for Errors and Erasures in Random Network Coding: The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.) <|cite_end|> <|cite_start|> (Reference: General Linearized Polynomial Interpolation and Its Applications: In this paper, we first propose a general interpolation algorithm in a free module of a linearized polynomial ring, and then apply this algorithm to decode several important families of codes, Gabidulin codes, KK codes and MV codes. Our decoding algorithm for Gabidulin codes is different from the polynomial reconstruction algorithm by Loidreau. When applied to decode KK codes, our interpolation algorithm is equivalent to the Sudan-style list-1 decoding algorithm proposed by K/"otter and Kschischang for KK codes. The general interpolation approach is also capable of solving the interpolation problem for the list decoding of MV codes proposed by Mahdavifar and Vardy, and has a lower complexity than solving linear equations.) <|cite_end|> that present iterative algorithms that perform Gabidulin list decoding with a list size of 1. In this paper we present an iterative algorithm that bears similarity to the ones in <|cite_start|> (Reference: Coding for Errors and Erasures in Random Network Coding: The problem of error-control in random linear network coding is considered. A ``noncoherent'' or ``channel oblivious'' model is assumed where neither transmitter nor receiver is assumed to have knowledge of the channel transfer characteristic. Motivated by the property that linear network coding is vector-space preserving, information transmission is modelled as the injection into the network of a basis for a vector space $V$ and the collection by the receiver of a basis for a vector space $U$. A metric on the projective geometry associated with the packet space is introduced, and it is shown that a minimum distance decoder for this metric achieves correct decoding if the dimension of the space $V \cap U$ is sufficiently large. 
If the dimension of each codeword is restricted to a fixed integer, the code forms a subset of a finite-field Grassmannian, or, equivalently, a subset of the vertices of the corresponding Grassmann graph. Sphere-packing and sphere-covering bounds as well as a generalization of the Singleton bound are provided for such codes. Finally, a Reed-Solomon-like code construction, related to Gabidulin's construction of maximum rank-distance codes, is described and a Sudan-style ``list-1'' minimum distance decoding algorithm is provided.) <|cite_end|> <|cite_start|> (Reference: A Welch-Berlekamp Like Algorithm for Decoding Gabidulin Codes: ) <|cite_end|> <|cite_start|> (Reference: General Linearized Polynomial Interpolation and Its Applications: In this paper, we first propose a general interpolation algorithm in a free module of a linearized polynomial ring, and then apply this algorithm to decode several important families of codes, Gabidulin codes, KK codes and MV codes. Our decoding algorithm for Gabidulin codes is different from the polynomial reconstruction algorithm by Loidreau. When applied to decode KK codes, our interpolation algorithm is equivalent to the Sudan-style list-1 decoding algorithm proposed by K/"otter and Kschischang for KK codes. The general interpolation approach is also capable of solving the interpolation problem for the list decoding of MV codes proposed by Mahdavifar and Vardy, and has a lower complexity than solving linear equations.) <|cite_end|> but yields {\em all} closest codewords rather than just one.
The ability to return the full list of closest codewords stems from our parametrization approach, which also strengthens the link between Gabidulin decoding and Reed-Solomon decoding; for Reed-Solomon decoding, a parametrization approach was developed in <|cite_start|> (Reference: Reed-Solomon list decoding from a system theoretic perspective: In this paper, the Sudan-Guruswami approach to list decoding of Reed-Solomon (RS) codes is cast in a system-theoretic framework. With the data, a set of trajectories or time series is associated which is then modeled as a so-called behavior. In this way, a connection is made with the behavioral approach to system theory. It is shown how a polynomial representation of the modeling behavior gives rise to the bivariate interpolating polynomials of the Sudan-Guruswami approach. The concept of "weighted row reduced" is introduced and used to achieve minimality. Two decoding methods are derived and a parametrization of all bivariate interpolating polynomials is given.) <|cite_end|>.
The paper is structured as follows. In the next section we present preliminaries on $q$-linearized polynomials, Gabidulin codes, and the rank metric, and we recall the polynomial interpolation conditions from <|cite_start|> (Reference: A Welch-Berlekamp Like Algorithm for Decoding Gabidulin Codes: ) <|cite_end|>. We also detail an iterative construction of the $q$-annihilator polynomial and the $q$-Lagrange polynomial. Section \ref{sec:modules} deals with modules over the ring of linearized polynomials and establishes the Predictable Leading Monomial property for minimal bases of such modules. In Section \ref{sec:decoding} we reformulate the Gabidulin list-decoding requirements in terms of a module represented by four $q$-linearized polynomials and present the decoding algorithm, which is based on a parametrization that uses the Predictable Leading Monomial property. To this end, we present two subalgorithms for computing a minimal basis of the interpolation module. Furthermore, we analyze the complexity of our algorithms. We conclude the paper in Section \ref{sec:conclusion}.
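For concreteness, here is one standard formulation of the objects named above; the precise conventions (normalization, indexing, notation) used later in the paper may differ. A $q$-linearized polynomial over $\mathbb{F}_{q^m}$ has the form
\[
f(x) = \sum_{i=0}^{d} f_i\, x^{q^i}, \qquad f_i \in \mathbb{F}_{q^m},\; f_d \neq 0,
\]
and its $q$-degree is $d$; such polynomials act as $\mathbb{F}_q$-linear maps on $\mathbb{F}_{q^m}$ and form a non-commutative ring under addition and composition. Given evaluation points $g_1,\dots,g_n \in \mathbb{F}_{q^m}$ that are linearly independent over $\mathbb{F}_q$, the $q$-annihilator polynomial of their span admits the iterative construction
\[
A_0(x) = x, \qquad A_i(x) = A_{i-1}(x)^{q} - A_{i-1}(g_i)^{q-1}\, A_{i-1}(x), \quad i = 1,\dots,n,
\]
so that $A_n$ is the monic $q$-linearized polynomial of $q$-degree $n$ vanishing exactly on the $\mathbb{F}_q$-span of $g_1,\dots,g_n$. The $q$-Lagrange polynomial associated with data $(g_i, r_i)$ is the unique $q$-linearized polynomial of $q$-degree less than $n$ satisfying $L(g_i) = r_i$ for all $i$, and it can likewise be updated one interpolation point at a time.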
Preliminary short versions of this paper are conference papers <|cite_start|> (Reference: Iterative List-Decoding of Gabidulin Codes via Gr\"obner Based Interpolation: We show how Gabidulin codes can be list decoded by using an iterative parametrization approach. For a given received word, our decoding algorithm processes its entries one by one, constructing four polynomials at each step. This then yields a parametrization of interpolating solutions for the data so far. From the final result a list of all codewords that are closest to the received word with respect to the rank metric is obtained.) <|cite_end|> and. <|paper_end|> | [
"<|reference_start|> A Fast Matrix Decoding Algorithm for Rank-Error-Correcting Codes: <|reference_end|>",
"<|reference_start|> Decoding Interleaved Gabidulin Codes and Multisequence Linearized Shift-Register Synthesis: An interleaved Gabidulin code is the direct sum of ℓ Gabidulin codes. We propose an efficient decoding algorithm that corrects with high probability errors of rank up to ℓ over ℓ+1(d−1), where d is the rank distance of the interleaved code. The probability of decoding failure is estimated. The proposed decoding is based on a multisequence linearized shift-register synthesis algorithm, given in the paper. The time complexity of the decoding algorithm is O(ℓd2). <|reference_end|>",
"<|reference_start|> Bounds on List Decoding of Rank-Metric Codes: So far, there is no polynomial-time list decoding algorithm (beyond half the minimum distance) for Gabidulin codes. These codes can be seen as the rank-metric equivalent of Reed--Solomon codes. In this paper, we provide bounds on the list size of rank-metric codes in order to understand whether polynomial-time list decoding is possible or whether it works only with exponential time complexity. Three bounds on the list size are proven. The first one is a lower exponential bound for Gabidulin codes and shows that for these codes no polynomial-time list decoding beyond the Johnson radius exists. Second, an exponential upper bound is derived, which holds for any rank-metric code of length $n$ and minimum rank distance $d$. The third bound proves that there exists a rank-metric code over $\\Fqm$ of length $n \\leq m$ such that the list size is exponential in the length for any radius greater than half the minimum rank distance. This implies that there cannot exist a polynomial upper bound depending only on $n$ and $d$ similar to the Johnson bound in Hamming metric. All three rank-metric bounds reveal significant differences to bounds for codes in Hamming metric. <|reference_end|>",
"<|reference_start|> A Welch-Berlekamp Like Algorithm for Decoding Gabidulin Codes: <|reference_end|>"
] | [
6,
9,
18,
31
] | {"<|multi_cite_1_1|>": "arxiv-675789", "<|multi_cite_1_2|>": "arxiv-1740", "<|cite_3|>": "ss-1529950", "<|cite_4|>": "ss-1005858", "<|cite_5|>": "ss-1129086", "<|cite_6|>": "arxiv-28374", "<|multi_cite_7_2|>": "ss-1911597", "<|multi_cite_8_1|>": "ss-929455", "<|multi_cite_8_2|>": "ss-1687342", "<|multi_cite_8_3|>": "ss-1832821", "<|multi_cite_8_4|>": "ss-1832822", "<|multi_cite_8_5|>": "arxiv-6070", "<|multi_cite_8_6|>": "ss-1447389", "<|multi_cite_9_1|>": "ss-1719872", "<|multi_cite_9_2|>": "arxiv-28387", "<|multi_cite_9_3|>": "ss-2010492", "<|multi_cite_9_4|>": "arxiv-40552", "<|multi_cite_9_5|>": "ss-2186575", "<|cite_10|>": "arxiv-40552", "<|multi_cite_11_1|>": "arxiv-53251", "<|multi_cite_11_2|>": "ss-841581", "<|cite_12|>": "arxiv-1740", "<|cite_13|>": "arxiv-675789", "<|multi_cite_14_1|>": "arxiv-675789", "<|multi_cite_14_2|>": "arxiv-1740", "<|multi_cite_15_1|>": "arxiv-40088", "<|multi_cite_15_2|>": "arxiv-20864", "<|cite_16|>": "ss-929455", "<|multi_cite_17_1|>": "arxiv-675789", "<|multi_cite_17_2|>": "arxiv-20864", "<|multi_cite_18_1|>": "arxiv-675789", "<|multi_cite_18_2|>": "ss-929455", "<|multi_cite_18_3|>": "arxiv-20864", "<|cite_19|>": "ss-1024505", "<|cite_20|>": "ss-929455", "<|cite_21|>": "arxiv-61488"} |
2406.07835-0 | <|paper_start|> Title: SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature
Abstract: SciRIFF: A Resource to Enhance Language Model Instruction-Following over Scientific Literature: We present SciRIFF (Scientific Resource for Instruction-Following and Finetuning), a dataset of 137K instruction-following demonstrations for 54 tasks covering five essential scientific literature understanding capabilities: information extraction, summarization, question answering, claim verification, and classification. SciRIFF demonstrations are notable for their long input contexts, detailed task specifications, and complex structured outputs. While instruction-following resources are available in specific domains such as clinical medicine and chemistry, SciRIFF is the first dataset focused on extracting and synthesizing information from research literature across a wide range of scientific fields. To demonstrate the utility of SciRIFF, we develop a sample-efficient strategy to adapt a general instruction-following model for science by performing additional finetuning on a mix of general-domain and SciRIFF demonstrations. In evaluations on nine held-out scientific tasks, our model -- called SciTulu -- improves over a strong LLM baseline by 28.1% and 6.5% at the 7B and 70B scales respectively, while maintaining general instruction-following performance within 2% of the baseline. We are optimistic that SciRIFF will facilitate the development and evaluation of LLMs to help researchers navigate the ever-growing body of scientific literature. We release our dataset, model checkpoints, and data processing and evaluation code to enable further research.
Introduction
Large language models (LLMs) have the potential to advance scientific progress by helping researchers navigate and draw insights from the scientific literature.
To accomplish these tasks, LLMs must be able to reliably follow a range of \emph{instructions}---e.g. to extract information, summarize content, or answer questions---when given research articles as input.
These instructions will often feature long input contexts, such as an entire research article. In addition, the model's responses may need to be \emph{structured} according to a specific format or schema that supports aggregation for literature review <|cite_start|> (Reference: Toward systematic review automation: a practical guide to using machine learning tools in research synthesis: ) <|cite_end|>, or is consumable by software components like augmented reading interfaces <|cite_start|> (Reference: The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces: Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, the need for new technology to support the reading process grows. In contrast to the process of finding papers, which has been transformed by Internet technology, the experience of reading research papers has changed little in decades. The PDF format for sharing research papers is widely used due to its portability, but it has significant downsides including: static content, poor accessibility for low-vision readers, and difficulty reading on mobile devices. This paper explores the question "Can recent advances in AI and HCI power intelligent, interactive, and accessible reading interfaces -- even for legacy PDFs?" We describe the Semantic Reader Project, a collaborative effort across multiple institutions to explore automatic creation of dynamic reading interfaces for research papers. Through this project, we've developed ten research prototype interfaces and conducted usability studies with more than 300 participants and real-world users showing improved reading experiences for scholars. We've also released a production reading interface for research papers that will incorporate the best features as they mature. We structure this paper around challenges scholars and the public face when reading research papers -- Discovery, Efficiency, Comprehension, Synthesis, and Accessibility -- and present an overview of our progress and remaining open challenges.) <|cite_end|> <|cite_start|> (Reference: Relatedly: Scaffolding Literature Reviews with Existing Related Work Sections: Scholars who want to research a scientific topic must take time to read, extract meaning, and identify connections across many papers. As scientific literature grows, this becomes increasingly challenging. Meanwhile, authors summarize prior research in papers' related work sections, though this is scoped to support a single paper. A formative study found that while reading multiple related work paragraphs helps overview a topic, it is hard to navigate overlapping and diverging references and research foci. In this work, we design a system, Relatedly, that scaffolds exploring and reading multiple related work paragraphs on a topic, with features including dynamic re-ranking and highlighting to spotlight unexplored dissimilar information, auto-generated descriptive paragraph headings, and low-lighting of redundant information. From a within-subjects user study (n=15), we found that scholars generate more coherent, insightful, and comprehensive topic outlines using Relatedly compared to a baseline paper list.) <|cite_end|>.
While bespoke models are available for specific scientific literature understanding tasks, models that can flexibly follow instructions are preferable both for their ease of use (offering a unified input / output interface) and for their ability to generalize to novel applications and settings.
The general instruction-following capabilities of LLMs have advanced rapidly in recent years, largely due to the availability of general-purpose instruction datasets <|cite_start|> (Reference: Instruction Tuning for Large Language Models: A Survey: This paper surveys research works in the quickly advancing field of instruction tuning (IT), a crucial technique to enhance the capabilities and controllability of large language models (LLMs). Instruction tuning refers to the process of further training LLMs on a dataset consisting of \textsc{(instruction, output)} pairs in a supervised fashion, which bridges the gap between the next-word prediction objective of LLMs and the users' objective of having LLMs adhere to human instructions. In this work, we make a systematic review of the literature, including the general methodology of IT, the construction of IT datasets, the training of IT models, and applications to different modalities, domains and applications, along with an analysis on aspects that influence the outcome of IT (e.g., generation of instruction outputs, size of the instruction dataset, etc). We also review the potential pitfalls of IT along with criticism against it, along with efforts pointing out current deficiencies of existing strategies and suggest some avenues for fruitful research. Project page: github.com/xiaoya-li/Instruction-Tuning-Survey) <|cite_end|>.
In addition, some instruction-following resources are available for specific scientific and medical tasks, such as describing the properties of a molecule <|cite_start|> (Reference: Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models: Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Instructions, a comprehensive instruction dataset designed for the biomolecular domain. Mol-Instructions encompasses three key components: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. Each component aims to improve the understanding and prediction capabilities of LLMs concerning biomolecular features and behaviors. Through extensive instruction tuning experiments on LLMs, we demonstrate the effectiveness of Mol-Instructions in enhancing large models' performance in the intricate realm of biomolecular studies, thus fostering progress in the biomolecular research community. Mol-Instructions is publicly available for ongoing research and will undergo regular updates to enhance its applicability.) <|cite_end|> <|cite_start|> (Reference: LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset: Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, however, we demonstrate that our developed LLMs can achieve very strong results on a comprehensive set of chemistry tasks, outperforming the most advanced GPT-4 and Claude 3 Opus by a substantial margin. To accomplish this, we propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning. It contains 14 selected chemistry tasks and over three million samples, laying a solid foundation for training and evaluating LLMs for chemistry. Using SMolInstruct, we fine-tune a set of open-source LLMs, among which, we find that Mistral serves as the best base model for chemistry tasks. Our analysis further demonstrates the critical role of the proposed dataset in driving the performance improvements.) <|cite_end|>or answering medical exam questions <|cite_start|> (Reference: Clinical Camel: An Open-Source Expert-Level Medical Language Model
with Dialogue-Based Knowledge Encoding: Large Language Models (LLMs) present immense potential in the medical field, yet concerns over data privacy, regulatory compliance, and model stability restrict their widespread adoption. Although the distillation of high-performing closed-source LLMs has proven effective for general tasks, their application in healthcare is limited due to reduced domain knowledge and remnants of alignment behavior hindering clinical tasks. To address these challenges, we propose Dialogue-Based Knowledge Encoding (DBKE). DBKE enhances models’ implicit knowledge base and primes them for conversational recall, augmenting their conversational capabilities and enabling a soft alignment for subsequent use cases. By transforming dense academic source text into synthetic dialogue, DBKE broadens the model’s knowledge base and enables a soft alignment that guides downstream behaviours. We present Clinical Camel, an open-source, healthcare-focused conversational model, to showcase the effectiveness of DBKE. Clinical Camel outperforms GPT-3.5 on the United States Medical Licensing Examination (USMLE) Step 1 and Step 3 with scores of 53.2% and 58.2%, respectively, compared to GPT-3.5’s scores of 36.1% and 55.7%. Clinical Camel adeptly handles multi-stage clinical case problems, provides adaptive counseling, and generates) <|cite_end|> <|cite_start|> (Reference: Medalpaca - an open-source collection of medical conversational {AI} models and training data: As large language models (LLMs) like OpenAI's GPT series continue to make strides, we witness the emergence of artificial intelligence applications in an ever-expanding range of fields. In medicine, these LLMs hold considerable promise for improving medical workflows, diagnostics, patient care, and education. Yet, there is an urgent need for open-source models that can be deployed on-premises to safeguard patient privacy. In our work, we present an innovative dataset consisting of over 160,000 entries, specifically crafted to fine-tune LLMs for effective medical applications. We investigate the impact of fine-tuning these datasets on publicly accessible pre-trained LLMs, and subsequently, we juxtapose the performance of pre-trained-only models against the fine-tuned models concerning the examinations that future medical doctors must pass to achieve certification.) <|cite_end|>(see \S\ref{sec:related_work} for a review).
However, there is a scarcity of resources aimed at enabling flexible scientific literature understanding capabilities across a range of domains.
In this work, we present \dataset (\textbf{Sci}entific \textbf{R}esource for \textbf{I}nstruction-\textbf{F}ollowing and \textbf{F}inetuning), a dataset to enable progress on instruction-following over scientific literature. \dataset includes 137K demonstrations for 54 tasks spanning five scientific literature understanding task categories: information extraction, summarization, question answering, claim verification, and classification.
\dataset covers five scientific domains,
ranging from artificial intelligence to clinical medicine (Figure \ref{fig:dataset_overview}).
The tasks in \dataset are derived from existing scientific literature understanding datasets with human-annotated inputs and outputs, and are converted to a common instruction-following format via templates written by the paper authors (Figure \ref{fig:task_taxonomy}).
Many of the tasks feature long input contexts and require structured model responses.
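To make the instance format concrete, the following is a small, hypothetical demonstration in the spirit of the claim-verification task illustrated in Figure \ref{fig:task_taxonomy}; the field names (\texttt{instruction}, \texttt{input}, \texttt{output}) and the output schema shown here are illustrative assumptions rather than the exact \dataset format.
\begin{verbatim}
import json

# Hypothetical single demonstration in the style described above.
# Field names and the output schema are illustrative assumptions,
# not the actual SciRIFF format.
demonstration = {
    "instruction": (
        "You will be shown a scientific claim and a paper abstract. "
        "Decide whether the abstract SUPPORTS or CONTRADICTS the claim, "
        "and answer with a JSON object containing the fields 'verdict' "
        "and 'evidence' (sentences copied verbatim from the abstract)."
    ),
    "input": {
        "claim": "Drug X reduces relapse rates in condition Y.",
        "abstract": (
            "We conducted a randomized trial of drug X in patients with "
            "condition Y. Relapse rates were significantly lower in the "
            "treatment arm than in the placebo arm."
        ),
    },
    "output": {
        "verdict": "SUPPORTS",
        "evidence": [
            "Relapse rates were significantly lower in the treatment arm "
            "than in the placebo arm."
        ],
    },
}

print(json.dumps(demonstration, indent=2))
\end{verbatim}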
We hold out nine representative tasks from \dataset for use as an evaluation benchmark, which we call \evaldataset (\S\ref{subsec:eval}). We then perform supervised finetuning experiments to identify a sample-efficient strategy to adapt \tulu V2 <|cite_start|> (Reference: Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2: Since the release of T\"ULU [Wang et al., 2023b], open resources for instruction tuning have developed quickly, from better base models to new finetuning techniques. We test and incorporate a number of these advances into T\"ULU, resulting in T\"ULU 2, a suite of improved T\"ULU models for advancing the understanding and best practices of adapting pretrained language models to downstream tasks and user preferences. Concretely, we release: (1) T\"ULU-V2-mix, an improved collection of high-quality instruction datasets; (2) T\"ULU 2, LLAMA-2 models finetuned on the V2 mixture; (3) T\"ULU 2+DPO, T\"ULU 2 models trained with direct preference optimization (DPO), including the largest DPO-trained model to date (T\"ULU 2+DPO 70B); (4) CODE T\"ULU 2, CODE LLAMA models finetuned on our V2 mix that outperform CODE LLAMA and its instruction-tuned variant, CODE LLAMA-Instruct. Our evaluation from multiple perspectives shows that the T\"ULU 2 suite achieves state-of-the-art performance among open models and matches or exceeds the performance of GPT-3.5-turbo-0301 on several benchmarks. We release all the checkpoints, data, training and evaluation code to facilitate future open efforts on adapting large language models.) <|cite_end|>---a strong open instruction-following model---for scientific literature use cases.
We find that, by starting from the original \tulu V2 model and performing additional finetuning on a downsampled mix of \dataset and data from the \tulumix, we are able to match the performance achieved by training on all instances, while using less than 20\% of the available data.
Using this sample-efficient training strategy, we improve performance on \evaldataset by 28.1\% over a directly comparable baseline at 7B scale, and by 6.5\% at 70B scale. At the same time, we achieve performance within 2\% of the baseline model on a general instruction-following benchmark (\S\ref{subsec:main_results}).
We publicly release our 7B and 70B models, which we call \ourmodel.
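As a rough illustration of the sample-efficient mixing strategy described above, the sketch below caps the number of demonstrations drawn per science task and combines them with a sample of general-domain instruction data before finetuning. The cap value, the 1:1 mixing ratio, and the field names are assumptions made for illustration; they are not the exact recipe used to train \ourmodel.
\begin{verbatim}
import random
from collections import defaultdict

def build_mixture(science_examples, general_examples,
                  per_task_cap=1000, seed=0):
    """Downsample science demonstrations per task and mix with general data.

    Both inputs are lists of dicts; science examples are assumed to carry
    a "task" key. The cap and the 1:1 ratio are illustrative choices.
    """
    rng = random.Random(seed)

    # Group science demonstrations by task so no single task dominates.
    by_task = defaultdict(list)
    for example in science_examples:
        by_task[example["task"]].append(example)

    science_mix = []
    for task, examples in by_task.items():
        rng.shuffle(examples)
        science_mix.extend(examples[:per_task_cap])

    # Add a comparable amount of general-domain instruction data.
    n_general = min(len(science_mix), len(general_examples))
    general_mix = rng.sample(general_examples, n_general)

    mixture = science_mix + general_mix
    rng.shuffle(mixture)
    return mixture
\end{verbatim}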
In summary, our contributions are as follows:
\vspace{-2pt}
\begin{itemize}[left=0pt,itemsep=1pt]
\item We introduce \dataset, a dataset with 137K instruction-following demonstrations covering 54 literature understanding tasks spanning five scientific domains. Many tasks in \dataset feature long input contexts and require structured model responses.
\item We employ a sample-efficient approach to adapt a family of general instruction-following models to scientific literature use cases. The resulting \ourmodel models achieve substantial performance gains on held-out scientific tasks, without sacrificing general capabilities.
\item We release the \dataset dataset, \ourmodel model checkpoints, and code to recreate the dataset and perform evaluations on nine held-out tasks from \dataset.
\end{itemize}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\textwidth]{figures/main.pdf}
\caption{
Example \dataset tasks. Given an input context from a research paper, the \styledtext{oc-black}{oc-gray-2}{\ \texttt{text prompt}\ } instructs an LLM to perform an operation on the input---e.g. determine whether the \styledtext{oc-gray-0}{oc-blue-8}{\ \texttt{abstract}\ } entails a scientific \styledtext{oc-gray-0}{oc-maroon}{\ \texttt{claim}\ }, extract information over the \styledtext{oc-gray-0}{oc-orange-8}{\ \texttt{full\_text}\ }, answer a question, etc. The model's \styledtext{oc-black}{oc-gray-2}{\ \texttt{output}\ } must conform to a task-specific, user-specified \styledtext{oc-black}{oc-gray-2}{\ \texttt{structure}\ }. \dataset unifies 54 scientific literature understanding tasks under a common input / output format, enabling the development of LLMs that can flexibly generalize to novel scientific use cases.
}
\label{fig:task_taxonomy}
\end{figure}
Related Work
\label{sec:related_work}
\paragraph{Strategies for creation of instruction-following resources.}
Instruction tuning, or finetuning LLMs to improve their instruction-following ability, has emerged as a crucial technique for enhancing generalizability and controllability of LLMs <|cite_start|> (Reference: Finetuned Language Models Are Zero-Shot Learners: This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.) <|cite_end|> <|cite_start|> (Reference: Multitask Prompted Training Enables Zero-Shot Task Generalization: Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language models' pretraining (Radford et al., 2019). Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language tasks into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts with diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks. We fine-tune a pretrained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Further, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All trained models are available at https://github.com/bigscience-workshop/t-zero and all prompts are available at https://github.com/bigscience-workshop/promptsource.) <|cite_end|> <|cite_start|> (Reference: Cross-Task Generalization via Natural Language Crowdsourcing Instructions: Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply reading textual instructions that define them and looking at a few examples. Despite the success of the conventional supervised learning on individual datasets, such models often struggle with generalization across tasks (e.g., a question-answering system cannot solve classification tasks). A long-standing challenge in AI is to build a model that learns a new task by understanding the human-readable instructions that define it. To study this, we introduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema. 
Using this meta-dataset, we measure cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones. We adopt generative pre-trained language models to encode task-specific instructions along with input and generate task output. Our results indicate that models benefit from instructions when evaluated in terms of generalization to unseen tasks (19% better for models utilizing instructions). These models, however, are far behind an estimated performance upperbound indicating significant room for more progress in this direction.) <|cite_end|> <|cite_start|> (Reference: Camels in a Changing Climate: Enhancing LM Adaptation with Tulu 2: Since the release of T\"ULU [Wang et al., 2023b], open resources for instruction tuning have developed quickly, from better base models to new finetuning techniques. We test and incorporate a number of these advances into T\"ULU, resulting in T\"ULU 2, a suite of improved T\"ULU models for advancing the understanding and best practices of adapting pretrained language models to downstream tasks and user preferences. Concretely, we release: (1) T\"ULU-V2-mix, an improved collection of high-quality instruction datasets; (2) T\"ULU 2, LLAMA-2 models finetuned on the V2 mixture; (3) T\"ULU 2+DPO, T\"ULU 2 models trained with direct preference optimization (DPO), including the largest DPO-trained model to date (T\"ULU 2+DPO 70B); (4) CODE T\"ULU 2, CODE LLAMA models finetuned on our V2 mix that outperform CODE LLAMA and its instruction-tuned variant, CODE LLAMA-Instruct. Our evaluation from multiple perspectives shows that the T\"ULU 2 suite achieves state-of-the-art performance among open models and matches or exceeds the performance of GPT-3.5-turbo-0301 on several benchmarks. We release all the checkpoints, data, training and evaluation code to facilitate future open efforts on adapting large language models.) <|cite_end|>. Several strategies have been explored for the creation of instruction-following resources, such as repurposing existing datasets using human-written instruction templates <|cite_start|> (Reference: Finetuned Language Models Are Zero-Shot Learners: This paper explores a simple method for improving the zero-shot learning abilities of language models. We show that instruction tuning -- finetuning language models on a collection of tasks described via instructions -- substantially improves zero-shot performance on unseen tasks. We take a 137B parameter pretrained language model and instruction-tune it on over 60 NLP tasks verbalized via natural language instruction templates. We evaluate this instruction-tuned model, which we call FLAN, on unseen task types. FLAN substantially improves the performance of its unmodified counterpart and surpasses zero-shot 175B GPT-3 on 20 of 25 tasks that we evaluate. FLAN even outperforms few-shot GPT-3 by a large margin on ANLI, RTE, BoolQ, AI2-ARC, OpenbookQA, and StoryCloze. Ablation studies reveal that number of finetuning datasets, model scale, and natural language instructions are key to the success of instruction tuning.) <|cite_end|> <|cite_start|> (Reference: Scaling Instruction-Finetuned Language Models: Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. 
We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PALM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.) <|cite_end|> <|cite_start|> (Reference: Multitask Prompted Training Enables Zero-Shot Task Generalization: Large language models have recently been shown to attain reasonable zero-shot generalization on a diverse set of tasks (Brown et al., 2020). It has been hypothesized that this is a consequence of implicit multitask learning in language models' pretraining (Radford et al., 2019). Can zero-shot generalization instead be directly induced by explicit multitask learning? To test this question at scale, we develop a system for easily mapping any natural language tasks into a human-readable prompted form. We convert a large set of supervised datasets, each with multiple prompts with diverse wording. These prompted datasets allow for benchmarking the ability of a model to perform completely held-out tasks. We fine-tune a pretrained encoder-decoder model (Raffel et al., 2020; Lester et al., 2021) on this multitask mixture covering a wide variety of tasks. The model attains strong zero-shot performance on several standard datasets, often outperforming models up to 16x its size. Further, our approach attains strong performance on a subset of tasks from the BIG-bench benchmark, outperforming models up to 6x its size. All trained models are available at https://github.com/bigscience-workshop/t-zero and all prompts are available at https://github.com/bigscience-workshop/promptsource.) <|cite_end|> <|cite_start|> (Reference: Cross-Task Generalization via Natural Language Crowdsourcing Instructions: Humans (e.g., crowdworkers) have a remarkable ability in solving different tasks, by simply reading textual instructions that define them and looking at a few examples. Despite the success of the conventional supervised learning on individual datasets, such models often struggle with generalization across tasks (e.g., a question-answering system cannot solve classification tasks). A long-standing challenge in AI is to build a model that learns a new task by understanding the human-readable instructions that define it. To study this, we introduce NATURAL INSTRUCTIONS, a dataset of 61 distinct tasks, their human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema. Using this meta-dataset, we measure cross-task generalization by training models on seen tasks and measuring generalization to the remaining unseen ones. We adopt generative pre-trained language models to encode task-specific instructions along with input and generate task output. Our results indicate that models benefit from instructions when evaluated in terms of generalization to unseen tasks (19% better for models utilizing instructions). 
These models, however, are far behind an estimated performance upperbound indicating significant room for more progress in this direction.) <|cite_end|>, crowdsourcing instructions [ <|cite_start|> (Reference: LIMA: Less Is More for Alignment: Large language models are trained in two stages: (1) unsupervised pretraining from raw text, to learn general-purpose representations, and (2) large scale instruction tuning and reinforcement learning, to better align to end tasks and user preferences. We measure the relative importance of these two stages by training LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses, without any reinforcement learning or human preference modeling. LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples in the training data, including complex queries that range from planning trip itineraries to speculating about alternate history. Moreover, the model tends to generalize well to unseen tasks that did not appear in the training data. In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases; this statistic is as high as 58% when compared to Bard and 65% versus DaVinci003, which was trained with human feedback. Taken together, these results strongly suggest that almost all knowledge in large language models is learned during pretraining, and only limited instruction tuning data is necessary to teach models to produce high quality output.) <|cite_end|>, ShareGPT\footnote{\url{https://sharegpt.com/}}] and using LLMs to generate synthetic instructions data. As LLM capabilities grow stronger, synthetic instruction generation approaches, often including humans in the loop as correctors, have shown promising results. Broadly, these approaches use LLMs to either generate new dataset/task instances alongside instructions <|cite_start|> (Reference: Self-Instruct: Aligning Language Models with Self-Generated Instructions: Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model. We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations. Our pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to finetune the original model. Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations. For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT-001. Self-Instruct provides an almost annotation-free method for aligning pre-trained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning. 
Our code and data are available at https://github.com/yizhongw/self-instruct.) <|cite_end|> <|cite_start|> (Reference: Learning to Generate Instruction Tuning Datasets for Zero-Shot Task Adaptation: We introduce Bonito, an open-source model for conditional task generation that converts unannotated text into task-specific training datasets for instruction tuning. We aim to enable zero-shot task adaptation of large language models on users' specialized, private data. We train Bonito by fine-tuning a pretrained large language model on a new large-scale dataset with 1.65M examples created by remixing existing instruction tuning datasets into meta-templates. The meta-templates for a dataset produce training examples where the input is the unannotated text and the task attribute and the output consists of the instruction and the response. We use Bonito to generate synthetic tasks for seven datasets from specialized domains with unannotated text across three task types -- yes-no question answering, extractive question answering, and natural language inference -- and adapt language models. We show that Bonito significantly improves the average performance of pretrained and instruction tuned models over the de facto self supervised baseline. For example, adapting Mistral-Instruct-v2 and instruction tuned variants of Mistral and Llama2 with Bonito improves the strong zero-shot performance by 22.1 F1 points whereas the next word prediction objective undoes some of the benefits of instruction tuning and reduces the average performance by 0.8 F1 points. We conduct additional experiments with Bonito to understand the effects of the domain, the size of the training set, and the choice of alternative synthetic task generators. Overall, we show that learning with synthetic instruction tuning datasets is an effective way to adapt language models to new domains. The model, dataset, and code are available at https://github.com/BatsResearch/bonito.) <|cite_end|>, or to ``back-translate'' existing datasets into instructions <|cite_start|> (Reference: Dynosaur: A Dynamic Growth Paradigm for Instruction-Tuning Data Curation: Instruction tuning has emerged to enhance the capabilities of large language models (LLMs) to comprehend instructions and generate appropriate responses. Existing methods either manually annotate or employ LLM (e.g., GPT-series) to generate data for instruction tuning. However, they often overlook associating instructions with existing annotated datasets. In this paper, we propose Dynosaur, a dynamic growth paradigm for the automatic curation of instruction-tuning data. Based on the metadata of existing datasets, we use LLMs to automatically construct instruction-tuning data by identifying relevant data fields and generating appropriate instructions. By leveraging the existing annotated datasets, Dynosaur offers several advantages: 1) it reduces the API cost for generating instructions (e.g., it costs less than $12 USD by calling GPT-3.5-turbo for generating 800K instruction tuning samples; 2) it provides high-quality data for instruction tuning (e.g., it performs better than Alpaca and Flan on Super-NI and Longform with comparable data sizes); and 3) it supports the continuous improvement of models by generating instruction-tuning data when a new annotated dataset becomes available. 
We further investigate a continual learning scheme for learning with the ever-growing instruction-tuning dataset, and demonstrate that replaying tasks with diverse instruction embeddings not only helps mitigate forgetting issues but generalizes to unseen tasks better. Code and data are available at https://github.com/WadeYin9712/Dynosaur.) <|cite_end|> <|cite_start|> (Reference: LongForm: Effective Instruction Tuning with Reverse Instructions: Instruction tuning enables language models to more effectively generalize and better follow user intent. However, obtaining instruction data is costly and challenging. Prior work employs methods such as expensive human annotation, crowd-sourced datasets with alignment issues, and generating noisy examples via LLMs. We introduce the LongForm-C dataset, which is created by reverse instructions. We generate instructions via LLMs for human-written corpus examples using reverse instructions. First we select a diverse set of human-written documents from corpora such as C4 and Wikipedia; then we generate instructions for these documents via LLMs. This approach provides a cheaper and cleaner instruction-tuning dataset with natural output and one suitable for long text generation. Our models outperform 10x larger language models without instruction tuning on tasks such as story/recipe generation and long-form question answering. Moreover, LongForm models outperform prior instruction-tuned models such as FLAN-T5 and Alpaca by a large margin, and improve language understanding capabilities further. We publicly release our data and models: https://github.com/akoksal/LongForm.) <|cite_end|> <|cite_start|> (Reference: Self-Alignment with Instruction Backtranslation: We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard not relying on distillation data, demonstrating highly effective self-alignment.) <|cite_end|>. In this work, we create instructions using human-written templates (\S \ref{subsec:dataset_construction}).
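As a minimal sketch of this template-based strategy (the template wording and the record fields below are hypothetical, not taken from \dataset), converting a human-annotated record into an instruction-following demonstration can be as simple as string formatting:
\begin{verbatim}
# Hypothetical template; the wording and the record fields ("abstract",
# "summary") are illustrative assumptions.
TEMPLATE = (
    "Read the abstract below and write a one-sentence summary.\n\n"
    "Abstract: {abstract}\n\nSummary:"
)

def to_demonstration(record):
    """Convert one human-annotated record into a (prompt, response) pair."""
    return {
        "prompt": TEMPLATE.format(abstract=record["abstract"]),
        "response": record["summary"],
    }
\end{verbatim}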
\paragraph{Instruction-following resources for scientific literature.} While numerous open-domain instruction-following collections exist, resources for enhancing and evaluating LLMs' instruction-following capabilities on scientific literature are limited. Such resources are critical for developing models that can assist researchers and accelerate scientific discovery <|cite_start|> (Reference: Galactica: A Large Language Model for Science: Information overload is a major obstacle to scientific progress. The explosive growth in scientific literature and data has made it ever harder to discover useful insights in a large mass of information. Today scientific knowledge is accessed through search engines, but they are unable to organize scientific knowledge alone. In this paper we introduce Galactica: a large language model that can store, combine and reason about scientific knowledge. We train on a large scientific corpus of papers, reference material, knowledge bases and many other sources. We outperform existing models on a range of scientific tasks. On technical knowledge probes such as LaTeX equations, Galactica outperforms the latest GPT-3 by 68.2% versus 49.0%. Galactica also performs well on reasoning, outperforming Chinchilla on mathematical MMLU by 41.3% to 35.7%, and PaLM 540B on MATH with a score of 20.4% versus 8.8%. It also sets a new state-of-the-art on downstream tasks such as PubMedQA and MedMCQA dev of 77.6% and 52.9%. And despite not being trained on a general corpus, Galactica outperforms BLOOM and OPT-175B on BIG-bench. We believe these results demonstrate the potential for language models as a new interface for science. We open source the model for the benefit of the scientific community.) <|cite_end|> <|cite_start|> (Reference: DARWIN Series: Domain Specific Large Language Models for Natural Science: Emerging tools bring forth fresh approaches to work, and the field of natural science is no different. In natural science, traditional manual, serial, and labour-intensive work is being augmented by automated, parallel, and iterative processes driven by artificial intelligence-based experimental automation and more. To add new capabilities in natural science, enabling the acceleration and enrichment of automation of the discovery process, we present DARWIN, a series of tailored LLMs for natural science, mainly in physics, chemistry, and material science. This series relies on open-source LLM, incorporating structured and unstructured scientific knowledge from public datasets and literature. We fine-tuned the models using over 60,000 instruction data points, emphasizing factual correctness. During the fine-tuning, we introduce the Scientific Instruction Generation (SIG) model, automating instruction generation from scientific texts. This eliminates the need for manual extraction or domain-specific knowledge graphs and efficiently injects scientific knowledge into the model. We also explore multi-task training strategies, revealing interconnections between scientific tasks. DARWIN series not only achieves state-of-the-art results on various scientific tasks but also diminishes reliance on closed-source AI models. Our research showcases the ability of LLM in the scientific domain, with the overarching goal of fostering prosperity within the broader AI for science community.) <|cite_end|>. 
Recent work has taken steps in this direction with the development of instruction-following datasets for specific domains such as mathematics <|cite_start|> (Reference: {Ma: 산, 염기 및 결로 방지 처리한 포장재가 '후지' 사과의 저장기간 동안 품질유지에 미치는 영향을 확인하기 위하여 국내산 제올라이트 분말소재를 산, 염기 처리하여 가공한 두께 30 ${\mu}m$ 의 필름에 수분응축 처리한 후 마스터뱃치 하여 생산한 개발포장구(FMA)에 후지사과를 20개씩 포장하여 $15^{\circ}C$ , 상대습도 67%의 항온항습실에 150일간 저장하였다. 저장 50일, 90일, 120일 및 150일에 기호도, 중량, Vitamin C, 당도, 산도, 기체조성을 비교 분석하였다. 품질유지가 가능한 기간은 대조구 L이 130일, 개발 포장구 NA는 170일로 나타나 40일 정도 품질유지기간이 대조구보다 길게 나타남을 확인했다. 저장 150일 후 중량 감소는 대조구 L은 1% 내외였고 개발 포장구 NA는 1.5%의 중량감소가 일어났다. Vitamin C 함량은 초기 5.24 mg/100g F.W.에서 저장 150일후 대조구 L은 2.09 mg/100g F.W였고 개발 포장구 NA는 2.72 mg/100g F.W.로 개발포장구 NA가 대조구 L보다 Vitamin C 잔존량이 높게 나타났다. 당도는 저장 150일 후 비슷하게 유지되었으며 산도는 개발포장구에서 더 높게 유지되고 있었으며 에틸렌가스의 경우 대조구 L은 192.2 ppm였고 포장구 NA은 141.4 ppm으로 조사되었다. 또한 이상의 결과에서 본 실험에서 개발포장구의 품질유지기간이 대조구보다 길게 나타난 것은 개발필름의 포장재내에 $O_2$ 농도는 대조구 L보다 높고 $CO_2$ 농도는 낮지만 에틸렌가스가 낮게 유지됨으로서 선도가 잘 유지되고 있다는 결론을 내릴 수 있다. 따라서 개발한 MA 저장용필름을 사과의 선도유지용 포장재로 활용할 수 있음을 확인했다. 반면 수분응축 현상이 생기지 않도록 수분응축억제 처리한 포장구에서 사과의 상품성에는 도움이 될 수도 있으나 신선도 유지에 큰 영향은 없는 것으로 보여진다. 【To investigate the effects of functional MA films (FMA) masterbatched by zeolite powder treated with 1 N HCl, 0.5 N NaCl solution and anti-fogging agent (NA) compared with control on the freshness extension of 'Fuji' apples during storage at $15^{\circ}C$ . Preference, weight loss, total ascorbic acid, sugar content, acidity, change of gas composition in package were measured. After 150 days of storage, the weight loss of control (L) apples was 1%, that of apple in FMA film (NA) was 1.5% after 150 days. Total ascorbic acid content of apples in control (L) after 150 days was 2.09 mg%, those of apple in FMA film (NA) was 2.72 mg%. The titratable acidity of apple in FMA film was higher than that in control, while soluble solids content of apples in FMA film was lower than that in control after 150 days. Ethylene gas adsorbability in control was 192.2 ppm and those in FMA film was 141.4 ppm after 150 days. Overall, apples in FMA film was better than that of control. It was verified that apples packed with LLDPE film (control) lasted about 130days, while those with FMA films lasted about 170 days. Also, FMA films treated with anti-fogging agent few different compare to nontreated film, but commerdity on the display in market was considered higher than that of non-treated anti-fogging agent.】) <|cite_end|> <|cite_start|> (Reference: MAmmoTH2: Scaling Instructions from the Web: Instruction tuning improves the reasoning abilities of large language models (LLMs), with data quality and scalability being the crucial factors. Most instruction tuning data come from human crowd-sourcing or GPT-4 distillation. We propose a paradigm to efficiently harvest 10 million naturally existing instruction data from the pre-training web corpus to enhance LLM reasoning. Our approach involves (1) recalling relevant documents, (2) extracting instruction-response pairs, and (3) refining the extracted pairs using open-source LLMs. Fine-tuning base LLMs on this dataset, we build MAmmoTH2 models, which significantly boost performance on reasoning benchmarks. Notably, MAmmoTH2-7B's (Mistral) performance increases from 11% to 36.7% on MATH and from 36% to 68.4% on GSM8K without training on any in-domain data. Further training MAmmoTH2 on public instruction tuning datasets yields MAmmoTH2-Plus, achieving state-of-the-art performance on several reasoning and chatbot benchmarks. 
Our work demonstrates how to harvest large-scale, high-quality instruction data without costly human annotation or GPT-4 distillation, providing a new paradigm for building better instruction tuning data.) <|cite_end|> <|cite_start|> (Reference: DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models: Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.) <|cite_end|> <|cite_start|> (Reference: WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct: Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.) <|cite_end|> <|cite_start|> (Reference: MathScale: Scaling Instruction Tuning for Mathematical Reasoning: Large language models (LLMs) have demonstrated remarkable capabilities in problem-solving. However, their proficiency in solving mathematical problems remains inadequate. We propose MathScale, a simple and scalable method to create high-quality mathematical reasoning data using frontier LLMs (e.g., {\tt GPT-3.5}). Inspired by the cognitive mechanism in human mathematical learning, it first extracts topics and knowledge points from seed math questions and then build a concept graph, which is subsequently used to generate new math questions. MathScale exhibits effective scalability along the size axis of the math dataset that we generate. As a result, we create a mathematical reasoning dataset (MathScaleQA) containing two million math question-answer pairs. 
To evaluate mathematical reasoning abilities of LLMs comprehensively, we construct {\sc MwpBench}, a benchmark of Math Word Problems, which is a collection of ten datasets (including GSM8K and MATH) covering K-12, college, and competition level math problems. We apply MathScaleQA to fine-tune open-source LLMs (e.g., LLaMA-2 and Mistral), resulting in significantly improved capabilities in mathematical reasoning. Evaluated on {\sc MwpBench}, MathScale-7B achieves state-of-the-art performance across all datasets, surpassing its best peers of equivalent size by 42.9\% in micro average accuracy and 43.7\% in macro average accuracy, respectively.) <|cite_end|> <|cite_start|> (Reference: OpenMathInstruct-1: A 1.8 Million Math Instruction Tuning Dataset: Recent work has shown the immense potential of synthetically generated datasets for training large language models (LLMs), especially for acquiring targeted skills. Current large-scale math instruction tuning datasets such as MetaMathQA (Yu et al., 2024) and MAmmoTH (Yue et al., 2024) are constructed using outputs from closed-source LLMs with commercially restrictive licenses. A key reason limiting the use of open-source LLMs in these data generation pipelines has been the wide gap between the mathematical skills of the best closed-source LLMs, such as GPT-4, and the best open-source LLMs. Building on the recent progress in open-source LLMs, our proposed prompting novelty, and some brute-force scaling, we construct OpenMathInstruct-1, a math instruction tuning dataset with 1.8M problem-solution pairs. The dataset is constructed by synthesizing code-interpreter solutions for GSM8K and MATH, two popular math reasoning benchmarks, using the recently released and permissively licensed Mixtral model. Our best model, OpenMath-CodeLlama-70B, trained on a subset of OpenMathInstruct-1, achieves a score of 84.6% on GSM8K and 50.7% on MATH, which is competitive with the best gpt-distilled models. We release our code, models, and the OpenMathInstruct-1 dataset under a commercially permissive license.) <|cite_end|>, medicine <|cite_start|> (Reference: An Error-Based Measure for Concept Drift Detection and Characterization: ) <|cite_end|> <|cite_start|> (Reference: PMC-LLaMA: toward building open-source language models for medicine: OBJECTIVE
Recently, large language models (LLMs) have showcased remarkable capabilities in natural language understanding. While demonstrating proficiency in everyday conversations and question-answering (QA) situations, these models frequently struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge. In this article, we describe the procedure for building a powerful, open-source language model specifically designed for medicine applications, termed as PMC-LLaMA.
MATERIALS AND METHODS
We adapt a general-purpose LLM toward the medical domain, involving data-centric knowledge injection through the integration of 4.8M biomedical academic papers and 30K medical textbooks, as well as comprehensive domain-specific instruction fine-tuning, encompassing medical QA, rationale for reasoning, and conversational dialogues with 202M tokens.
RESULTS
While evaluating various public medical QA benchmarks and manual rating, our lightweight PMC-LLaMA, which consists of only 13B parameters, exhibits superior performance, even surpassing ChatGPT. All models, codes, and datasets for instruction tuning will be released to the research community.
DISCUSSION
Our contributions are 3-fold: (1) we build up an open-source LLM toward the medical domain. We believe the proposed PMC-LLaMA model can promote further development of foundation models in medicine, serving as a medical trainable basic generative language backbone; (2) we conduct thorough ablation studies to demonstrate the effectiveness of each proposed component, demonstrating how different training data and model scales affect medical LLMs; (3) we contribute a large-scale, comprehensive dataset for instruction tuning.
CONCLUSION
In this article, we systematically investigate the process of building up an open-source medical-specific LLM, PMC-LLaMA.) <|cite_end|> <|cite_start|> (Reference: Exploring the Effectiveness of Instruction Tuning in Biomedical Language Processing: Large Language Models (LLMs), particularly those similar to ChatGPT, have significantly influenced the field of Natural Language Processing (NLP). While these models excel in general language tasks, their performance in domain-specific downstream tasks such as biomedical and clinical Named Entity Recognition (NER), Relation Extraction (RE), and Medical Natural Language Inference (NLI) is still evolving. In this context, our study investigates the potential of instruction tuning for biomedical language processing, applying this technique to two general LLMs of substantial scale. We present a comprehensive, instruction-based model trained on a dataset that consists of approximately $200,000$ instruction-focused samples. This dataset represents a carefully curated compilation of existing data, meticulously adapted and reformatted to align with the specific requirements of our instruction-based tasks. This initiative represents an important step in utilising such models to achieve results on par with specialised encoder-only models like BioBERT and BioClinicalBERT for various classical biomedical NLP tasks. Our work includes an analysis of the dataset's composition and its impact on model performance, providing insights into the intricacies of instruction tuning. By sharing our codes, models, and the distinctively assembled instruction-based dataset, we seek to encourage ongoing research and development in this area.) <|cite_end|>, chemistry <|cite_start|> (Reference: LlaSMol: Advancing Large Language Models for Chemistry with a Large-Scale, Comprehensive, High-Quality Instruction Tuning Dataset: Chemistry plays a crucial role in many domains, such as drug discovery and material science. While large language models (LLMs) such as GPT-4 exhibit remarkable capabilities on natural language processing tasks, existing research indicates that their performance on chemistry tasks is discouragingly low. In this paper, however, we demonstrate that our developed LLMs can achieve very strong results on a comprehensive set of chemistry tasks, outperforming the most advanced GPT-4 and Claude 3 Opus by a substantial margin. To accomplish this, we propose SMolInstruct, a large-scale, comprehensive, and high-quality dataset for instruction tuning. It contains 14 selected chemistry tasks and over three million samples, laying a solid foundation for training and evaluating LLMs for chemistry. Using SMolInstruct, we fine-tune a set of open-source LLMs, among which, we find that Mistral serves as the best base model for chemistry tasks. Our analysis further demonstrates the critical role of the proposed dataset in driving the performance improvements.) <|cite_end|> <|cite_start|> (Reference: ChemLLM: A Chemical Large Language Model: Large language models (LLMs) have made impressive progress in chemistry applications. However, the community lacks an LLM specifically designed for chemistry. The main challenges are two-fold: firstly, most chemical data and scientific knowledge are stored in structured databases, which limits the model's ability to sustain coherent dialogue when used directly. Secondly, there is an absence of objective and fair benchmark that encompass most chemistry tasks. 
Here, we introduce ChemLLM, a comprehensive framework that features the first LLM dedicated to chemistry. It also includes ChemData, a dataset specifically designed for instruction tuning, and ChemBench, a robust benchmark covering nine essential chemistry tasks. ChemLLM is adept at performing various tasks across chemical disciplines with fluid dialogue interaction. Notably, ChemLLM achieves results comparable to GPT-4 on the core chemical tasks and demonstrates competitive performance with LLMs of similar size in general scenarios. ChemLLM paves a new path for exploration in chemical studies, and our method of incorporating structured chemical knowledge into dialogue systems sets a new standard for developing LLMs in various scientific fields. Codes, Datasets, and Model weights are publicly accessible at https://hf.co/AI4Chem) <|cite_end|>, molecular biology <|cite_start|> (Reference: Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models: Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Instructions, a comprehensive instruction dataset designed for the biomolecular domain. Mol-Instructions encompasses three key components: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. Each component aims to improve the understanding and prediction capabilities of LLMs concerning biomolecular features and behaviors. Through extensive instruction tuning experiments on LLMs, we demonstrate the effectiveness of Mol-Instructions in enhancing large models' performance in the intricate realm of biomolecular studies, thus fostering progress in the biomolecular research community. Mol-Instructions is publicly available for ongoing research and will undergo regular updates to enhance its applicability.) <|cite_end|> <|cite_start|> (Reference: BioInstruct: Instruction Tuning of Large Language Models for Biomedical Natural Language Processing: To enhance the performance of large language models (LLMs) in biomedical natural language processing (BioNLP) by introducing a domain-specific instruction dataset and examining its impact when combined with multi-task learning principles. We created the BioInstruct, comprising 25,005 instructions to instruction-tune LLMs(LLaMA 1 & 2, 7B & 13B version). The instructions were created by prompting the GPT-4 language model with three-seed samples randomly drawn from an 80 human curated instructions. We employed Low-Rank Adaptation(LoRA) for parameter-efficient fine-tuning. We then evaluated these instruction-tuned LLMs on several BioNLP tasks, which can be grouped into three major categories: question answering(QA), information extraction(IE), and text generation(GEN). We also examined whether categories(e.g., QA, IE, and generation) of instructions impact model performance. Comparing with LLMs without instruction-tuned, our instruction-tuned LLMs demonstrated marked performance gains: 17.3% in QA, 5.7% in IE, and 96% in Generation tasks. Our 7B-parameter instruction-tuned LLaMA 1 model was competitive or even surpassed other LLMs in the biomedical domain that were also fine-tuned from LLaMA 1 with vast domain-specific data or a variety of tasks. 
Our results also show that the performance gain is significantly higher when instruction fine-tuning is conducted with closely related tasks. Our findings align with the observations of multi-task learning, suggesting the synergies between two tasks. The BioInstruct dataset serves as a valuable resource and instruction tuned LLMs lead to the best performing BioNLP applications.) <|cite_end|>, materials science <|cite_start|> (Reference: Transport properties of heterostructures composed of Mo(S,Se)$_2$ on \emph{h}-BN: ) <|cite_end|>, and college-level foundational science <|cite_start|> (Reference: SciGLM: Training Scientific Language Models with Self-Reflective Instruction Annotation and Tuning: Large Language Models (LLMs) have shown promise in assisting scientific discovery. However, such applications are currently limited by LLMs' deficiencies in understanding intricate scientific concepts, deriving symbolic equations, and solving advanced numerical calculations. To bridge these gaps, we introduce SciGLM, a suite of scientific language models able to conduct college-level scientific reasoning. Central to our approach is a novel self-reflective instruction annotation framework to address the data scarcity challenge in the science domain. This framework leverages existing LLMs to generate step-by-step reasoning for unlabelled scientific questions, followed by a process of self-reflective critic-and-revise. Applying this framework, we curated SciInstruct, a diverse and high-quality dataset encompassing physics, chemistry, math, and formal proofs. We fine-tuned the ChatGLM family of language models with SciInstruct, enhancing their scientific and mathematical reasoning capabilities. Remarkably, the SciGLM consistently improves both the base model (ChatGLM3-6B-Base) by 4.87% and larger-scale models (32B) by 2.67%, without sacrificing the language understanding capabilities of the base model. This makes SciGLM a suitable foundational model to facilitate diverse scientific discovery tasks. For the benefit of the wider research community, we release SciInstruct, and SciGLM, alongside a self-reflective framework and fine-tuning code at https://github.com/THUDM/SciGLM.) <|cite_end|>. Besides domain limitations, these resources primarily focus on improving LLMs' abilities to solve college-level science problems or reasoning tasks (see also, MMLU <|cite_start|> (Reference: Measuring Massive Multitask Language Understanding: We propose a new test to measure a text model's multitask accuracy. The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more. To attain high accuracy on this test, models must possess extensive world knowledge and problem solving ability. We find that while most recent models have near random-chance accuracy, the very largest GPT-3 model improves over random chance by almost 20 percentage points on average. However, on every one of the 57 tasks, the best models still need substantial improvements before they can reach expert-level accuracy. Models also have lopsided performance and frequently do not know when they are wrong. Worse, they still have near-random accuracy on some socially important subjects such as morality and law. By comprehensively evaluating the breadth and depth of a model's academic and professional understanding, our test can be used to analyze models across many tasks and to identify important shortcomings.) 
<|cite_end|>, SciEval <|cite_start|> (Reference: SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research: Recently, there has been growing interest in using Large Language Models (LLMs) for scientific research. Numerous benchmarks have been proposed to evaluate the ability of LLMs for scientific research. However, current benchmarks are mostly based on pre-collected objective questions. This design suffers from data leakage problem and lacks the evaluation of subjective Q/A ability. In this paper, we propose SciEval, a comprehensive and multi-disciplinary evaluation benchmark to address these issues. Based on Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate scientific research ability. In particular, we design a "dynamic" subset based on scientific principles to prevent evaluation from potential data leakage. Both objective and subjective questions are included in SciEval. These characteristics make SciEval a more effective benchmark for scientific research ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs show that, although GPT-4 achieves SOTA performance compared to other LLMs, there is still substantial room for improvement, especially for dynamic questions. The data and codes are now publicly available.) <|cite_end|>, TheoremQA <|cite_start|> (Reference: t: 选用岭南黄羽肉鸡初生雏840只,分为5组,在56天的饲养中,在1,3,4,5组日粮中分别添加0,400,800,1000mg/kg补益类中草药提取物'敌克素',在2组日粮中添加饲用金霉素(50mg/kg).结果,'敌克素'显著提高了56日龄黄羽肉鸡生产性能和血清CD4+/CD8+的比例(P<0.05),显著降低血清尿素氮浓度(P<0.05),效果优于金霉素.) <|cite_end|>, SciBench, and GPQA <|cite_start|> (Reference: GPQA: A Graduate-Level Google-Proof Q&A Benchmark: We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely difficult: experts who have or are pursuing PhDs in the corresponding domains reach 65% accuracy (74% when discounting clear mistakes the experts identified in retrospect), while highly skilled non-expert validators only reach 34% accuracy, despite spending on average over 30 minutes with unrestricted access to the web (i.e., the questions are "Google-proof"). The questions are also difficult for state-of-the-art AI systems, with our strongest GPT-4 based baseline achieving 39% accuracy. If we are to use future AI systems to help us answer very hard questions, for example, when developing new scientific knowledge, we need to develop scalable oversight methods that enable humans to supervise their outputs, which may be difficult even if the supervisors are themselves skilled and knowledgeable. The difficulty of GPQA both for skilled non-experts and frontier AI systems should enable realistic scalable oversight experiments, which we hope can help devise ways for human experts to reliably get truthful information from AI systems that surpass human capabilities.) <|cite_end|>). In contrast, \dataset both covers a broader set of scientific domains and focuses on document-grounded scientific literature understanding tasks that can power real-world scientific use cases. Another distinguishing factor of our work is our inclusion of tasks that require structured outputs, following a uniform JSON output format, besides text-to-text tasks. Some instruction-tuning resources have explored structured output formats <|cite_start|> (Reference: TableLlama: Towards Open Large Generalist Models for Tables: Semi-structured tables are ubiquitous. 
There has been a variety of tasks that aim to automatically interpret, augment, and query tables. Current methods often require pretraining on tables or special model architecture design, are restricted to specific table types, or have simplifying assumptions about tables and tasks. This paper makes the first step towards developing open-source large language models (LLMs) as generalists for a diversity of table-based tasks. Towards that end, we construct TableInstruct, a new dataset with a variety of realistic tables and tasks, for instruction tuning and evaluating LLMs. We further develop the first open-source generalist model for tables, TableLlama, by fine-tuning Llama 2 (7B) with LongLoRA to address the long context challenge. We experiment under both in-domain setting and out-of-domain setting. On 7 out of 8 in-domain tasks, TableLlama achieves comparable or better performance than the SOTA for each task, despite the latter often has task-specific design. On 6 out-of-domain datasets, it achieves 5-44 absolute point gains compared with the base model, showing that training on TableInstruct enhances the model's generalizability. We open-source our dataset and trained model to boost future work on developing open generalist models for tables.) <|cite_end|> <|cite_start|> (Reference: InstructUIE: Multi-task Instruction Tuning for Unified Information Extraction: Large language models have unlocked strong multi-task capabilities from reading instructive prompts. However, recent studies have shown that existing large models still have difficulty with information extraction tasks. For example, gpt-3.5-turbo achieved an F1 score of 18.22 on the Ontonotes dataset, which is significantly lower than the state-of-the-art performance. In this paper, we propose InstructUIE, a unified information extraction framework based on instruction tuning, which can uniformly model various information extraction tasks and capture the inter-task dependency. To validate the proposed method, we introduce IE INSTRUCTIONS, a benchmark of 32 diverse information extraction datasets in a unified text-to-text format with expert-written instructions. Experimental results demonstrate that our method achieves comparable performance to Bert in supervised settings and significantly outperforms the state-of-the-art and gpt3.5 in zero-shot settings.) <|cite_end|> <|cite_start|> (Reference: Instruct and Extract: Instruction Tuning for On-Demand Information Extraction: Large language models with instruction-following capabilities open the door to a wider group of users. However, when it comes to information extraction - a classic task in natural language processing - most task-specific systems cannot align well with long-tail ad hoc extraction use cases for non-expert users. To address this, we propose a novel paradigm, termed On-Demand Information Extraction, to fulfill the personalized demands of real-world users. Our task aims to follow the instructions to extract the desired content from the associated text and present it in a structured tabular format. The table headers can either be user-specified or inferred contextually by the model. To facilitate research in this emerging area, we present a benchmark named InstructIE, inclusive of both automatically generated training data, as well as the human-annotated test set. Building on InstructIE, we further develop an On-Demand Information Extractor, ODIE. 
Comprehensive evaluations on our benchmark reveal that ODIE substantially outperforms the existing open-source models of similar size. Our code and dataset are released on https://github.com/yzjiao/On-Demand-IE.) <|cite_end|> <|cite_start|> (Reference: JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning: Instruction tuning has become an essential process for optimizing the performance of large language models (LLMs). However, current text-to-text instruction tuning methods, referred to as TextTuning, exhibit significant limitations in terms of generalization, robustness, and controllability, primarily due to the absence of explicit task structures. In this paper, we introduce JsonTuning, a novel structure-to-structure approach for instruction tuning. By utilizing the versatile and structured format of JSON to represent tasks, JsonTuning enhances generalization by enabling the model to comprehend essential task elements and their interrelations, improves robustness by reducing ambiguity, and increases controllability by providing explicit control over the output. We conduct a comprehensive comparative analysis between JsonTuning and TextTuning using various language models and evaluation benchmarks. Our experimental results demonstrate that JsonTuning consistently outperforms TextTuning across a range of applications, showing marked improvements in performance, robustness, and controllability. By addressing the inherent limitations of TextTuning, JsonTuning reveals significant potential for developing more effective and reliable LLMs capable of managing diverse scenarios.) <|cite_end|>, but not with a focus on scientific literature. Finally, most datasets in \dataset require long input contexts, leading to longer instruction contexts than prior work (see Appendix Table~\ref{tab:context_comparison} for a comparison).
\paragraph{Other scientific literature benchmarks.}
In addition to instruction-following resources, prior works have also developed benchmarks to improve and assess scientific literature understanding. Notable efforts in the biomedical domain include BLUE <|cite_start|> (Reference: Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets: Inspired by the success of the General Language Understanding Evaluation benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research in the development of pre-training language representations in the biomedicine domain. The benchmark consists of five tasks with ten datasets that cover both biomedical and clinical texts with different dataset sizes and difficulties. We also evaluate several baselines based on BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results. We make the datasets, pre-trained models, and codes publicly available at https://github.com/ncbi-nlp/BLUE_Benchmark.) <|cite_end|>, BLURB <|cite_start|> (Reference: Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing: Pretraining large neural language models, such as BERT, has led to impressive gains on many natural language processing (NLP) tasks. However, most pretraining efforts focus on general domain corpora, such as newswire and Web. A prevailing assumption is that even domain-specific pretraining can benefit by starting from general-domain language models. In this paper, we challenge this assumption by showing that for domains with abundant unlabeled text, such as biomedicine, pretraining language models from scratch results in substantial gains over continual pretraining of general-domain language models. To facilitate this investigation, we compile a comprehensive biomedical NLP benchmark from publicly-available datasets. Our experiments show that domain-specific pretraining serves as a solid foundation for a wide range of biomedical NLP tasks, leading to new state-of-the-art results across the board. Further, in conducting a thorough evaluation of modeling choices, both for pretraining and task-specific fine-tuning, we discover that some common practices are unnecessary with BERT models, such as using complex tagging schemes in named entity recognition (NER). To help accelerate research in biomedical NLP, we have released our state-of-the-art pretrained and task-specific models for the community, and created a leaderboard featuring our BLURB benchmark (short for Biomedical Language Understanding & Reasoning Benchmark) at https://aka.ms/BLURB.) <|cite_end|>, InBoXBART <|cite_start|> (Reference: An Error-Based Measure for Concept Drift Detection and Characterization: ) <|cite_end|>, and BigBio <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|>; \dataset covers a broader set of domains than these resources. Other efforts such as SciRepEval <|cite_start|> (Reference: `{S: 본 논문에서는 ETRI의 0.5 ㎛ MESFET 공정을 이용하여 광대역 MMIC 2단 증폭기를 설계 및 제작하였다. 정합회로에 의한 보상 방법(Compensated matching network)을 응용하여 2단 증폭기에서 첫 번째 단과 두 번째 단의 이득 특성이 서로 보상되도록 설계하여 광대역 특성을 얻을 수 있었으며, 일반적인 광대역 증폭기가 넓은 대역폭과 낮은 이득 및 출력 전력을 갖지만 본 논문에서는 compensated matching network를 이용하여 넓은 대역폭뿐만 아니라 높은 이득 특성을 얻었다. 제작된 광대역 증폭기의 측정결과, 1.1~2.8 GHz의 대역폭을 가졌으며 S_(21) 이득은 11.1±0.3 dB를 얻었다. 전력 특성의 경우 2.4 GHz에서 입력전력이 4 dBm일 때 P1dB는 12.6 dBm을 얻었다.) <|cite_end|>, Galactica | [
"<|reference_start|> The Semantic Reader Project: Augmenting Scholarly Documents through AI-Powered Interactive Reading Interfaces: Scholarly publications are key to the transfer of knowledge from scholars to others. However, research papers are information-dense, and as the volume of the scientific literature grows, the need for new technology to support the reading process grows. In contrast to the process of finding papers, which has been transformed by Internet technology, the experience of reading research papers has changed little in decades. The PDF format for sharing research papers is widely used due to its portability, but it has significant downsides including: static content, poor accessibility for low-vision readers, and difficulty reading on mobile devices. This paper explores the question \"Can recent advances in AI and HCI power intelligent, interactive, and accessible reading interfaces -- even for legacy PDFs?\" We describe the Semantic Reader Project, a collaborative effort across multiple institutions to explore automatic creation of dynamic reading interfaces for research papers. Through this project, we've developed ten research prototype interfaces and conducted usability studies with more than 300 participants and real-world users showing improved reading experiences for scholars. We've also released a production reading interface for research papers that will incorporate the best features as they mature. We structure this paper around challenges scholars and the public face when reading research papers -- Discovery, Efficiency, Comprehension, Synthesis, and Accessibility -- and present an overview of our progress and remaining open challenges. <|reference_end|>",
"<|reference_start|> Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models: Large Language Models (LLMs), with their remarkable task-handling capabilities and innovative outputs, have catalyzed significant advancements across a spectrum of fields. However, their proficiency within specialized domains such as biomolecular studies remains limited. To address this challenge, we introduce Mol-Instructions, a comprehensive instruction dataset designed for the biomolecular domain. Mol-Instructions encompasses three key components: molecule-oriented instructions, protein-oriented instructions, and biomolecular text instructions. Each component aims to improve the understanding and prediction capabilities of LLMs concerning biomolecular features and behaviors. Through extensive instruction tuning experiments on LLMs, we demonstrate the effectiveness of Mol-Instructions in enhancing large models' performance in the intricate realm of biomolecular studies, thus fostering progress in the biomolecular research community. Mol-Instructions is publicly available for ongoing research and will undergo regular updates to enhance its applicability. <|reference_end|>",
"<|reference_start|> Self-Alignment with Instruction Backtranslation: We present a scalable method to build a high quality instruction following language model by automatically labelling human-written text with corresponding instructions. Our approach, named instruction backtranslation, starts with a language model finetuned on a small amount of seed data, and a given web corpus. The seed model is used to construct training examples by generating instruction prompts for web documents (self-augmentation), and then selecting high quality examples from among these candidates (self-curation). This data is then used to finetune a stronger model. Finetuning LLaMa on two iterations of our approach yields a model that outperforms all other LLaMa-based models on the Alpaca leaderboard not relying on distillation data, demonstrating highly effective self-alignment. <|reference_end|>",
"<|reference_start|> PMC-LLaMA: toward building open-source language models for medicine: OBJECTIVE\nRecently, large language models (LLMs) have showcased remarkable capabilities in natural language understanding. While demonstrating proficiency in everyday conversations and question-answering (QA) situations, these models frequently struggle in domains that require precision, such as medical applications, due to their lack of domain-specific knowledge. In this article, we describe the procedure for building a powerful, open-source language model specifically designed for medicine applications, termed as PMC-LLaMA.\n\n\nMATERIALS AND METHODS\nWe adapt a general-purpose LLM toward the medical domain, involving data-centric knowledge injection through the integration of 4.8M biomedical academic papers and 30K medical textbooks, as well as comprehensive domain-specific instruction fine-tuning, encompassing medical QA, rationale for reasoning, and conversational dialogues with 202M tokens.\n\n\nRESULTS\nWhile evaluating various public medical QA benchmarks and manual rating, our lightweight PMC-LLaMA, which consists of only 13B parameters, exhibits superior performance, even surpassing ChatGPT. All models, codes, and datasets for instruction tuning will be released to the research community.\n\n\nDISCUSSION\nOur contributions are 3-fold: (1) we build up an open-source LLM toward the medical domain. We believe the proposed PMC-LLaMA model can promote further development of foundation models in medicine, serving as a medical trainable basic generative language backbone; (2) we conduct thorough ablation studies to demonstrate the effectiveness of each proposed component, demonstrating how different training data and model scales affect medical LLMs; (3) we contribute a large-scale, comprehensive dataset for instruction tuning.\n\n\nCONCLUSION\nIn this article, we systematically investigate the process of building up an open-source medical-specific LLM, PMC-LLaMA. <|reference_end|>"
] | [
1,
4,
22,
32
] | {"<|cite_1|>": "ss-1927039", "<|multi_cite_2_1|>": "arxiv-491884", "<|multi_cite_2_2|>": "arxiv-481289", "<|cite_3|>": "arxiv-532800", "<|multi_cite_4_1|>": "ss-1167696", "<|multi_cite_4_2|>": "arxiv-585476", "<|multi_cite_5_1|>": "ss-844569", "<|multi_cite_5_2|>": "ss-1252776", "<|cite_6|>": "arxiv-559643", "<|multi_cite_7_1|>": "arxiv-364691", "<|multi_cite_7_2|>": "arxiv-374481", "<|multi_cite_7_3|>": "arxiv-335354", "<|multi_cite_7_4|>": "arxiv-559643", "<|multi_cite_8_1|>": "arxiv-364691", "<|multi_cite_8_2|>": "arxiv-455741", "<|multi_cite_8_3|>": "arxiv-374481", "<|multi_cite_8_4|>": "arxiv-335354", "<|multi_cite_31_2|>": "arxiv-506251", "<|multi_cite_9_1|>": "arxiv-470888", "<|multi_cite_9_3|>": "arxiv-590220", "<|multi_cite_10_1|>": "arxiv-508035", "<|multi_cite_10_2|>": "arxiv-497704", "<|multi_cite_10_3|>": "arxiv-530534", "<|multi_cite_11_1|>": "arxiv-462693", "<|multi_cite_11_2|>": "arxiv-534133", "<|multi_cite_12_1|>": "ss-1318386", "<|multi_cite_12_2|>": "arxiv-613492", "<|multi_cite_12_3|>": "arxiv-582347", "<|multi_cite_12_4|>": "arxiv-532150", "<|multi_cite_12_5|>": "arxiv-592197", "<|multi_cite_12_6|>": "ss-1855077", "<|multi_cite_13_1|>": "ss-678612", "<|multi_cite_13_2|>": "ss-1347613", "<|multi_cite_13_3|>": "arxiv-572370", "<|multi_cite_14_1|>": "arxiv-585476", "<|multi_cite_14_2|>": "arxiv-584176", "<|multi_cite_15_1|>": "ss-1167696", "<|multi_cite_15_2|>": "arxiv-554208", "<|cite_16|>": "ss-888331", "<|cite_17|>": "arxiv-575735", "<|cite_18|>": "arxiv-288666", "<|cite_19|>": "arxiv-533927", "<|cite_20|>": "ss-727034", "<|cite_22|>": "arxiv-560318", "<|multi_cite_23_1|>": "arxiv-558915", "<|multi_cite_23_2|>": "arxiv-497537", "<|multi_cite_23_3|>": "arxiv-552270", "<|multi_cite_23_4|>": "arxiv-545639", "<|cite_24|>": "arxiv-209520", "<|cite_25|>": "arxiv-281833", "<|cite_26|>": "ss-678612", "<|cite_27|>": "ss-832115", "<|cite_28|>": "ss-690282", "<|cite_29|>": "arxiv-462693", "<|cite_30|>": "arxiv-560460"} |
1807.00993 | <|paper_start|> Title: Improved training of neural trans-dimensional random field language models with dynamic noise-contrastive estimation
Abstract: Improved training of neural trans-dimensional random field language models with dynamic noise-contrastive estimation: A new whole-sentence language model, the neural trans-dimensional random field language model (neural TRF LM), in which sentences are modeled as a collection of random fields and the potential function is defined by a neural network, has been introduced and successfully trained by noise-contrastive estimation (NCE). In this paper, we extend NCE and propose dynamic noise-contrastive estimation (DNCE) to solve two problems observed in NCE training. First, a dynamic noise distribution is introduced and trained simultaneously to converge to the data distribution. This helps to significantly cut down the number of noise samples used in NCE and reduce the training cost. Second, DNCE discriminates between sentences generated from the noise distribution and sentences generated from the interpolation of the data distribution and the noise distribution. This alleviates the overfitting problem caused by the sparseness of the training set. With DNCE, we can successfully and efficiently train neural TRF LMs on a large corpus (about 0.8 billion words) with a large vocabulary (about 568 K words). Neural TRF LMs perform as well as LSTM LMs with fewer parameters, while being 5x~114x faster in rescoring sentences. Interpolating neural TRF LMs with LSTM LMs and n-gram LMs can further reduce the error rates.
Introduction
\label{sec:intro}
Statistical language models (LMs), which estimate the probability of a sentence, are an important component in various applications, such as automatic speech recognition (ASR) and machine translation (MT).
Most LMs, including the classical n-gram LMs <|cite_start|> (Reference: An Empirical Study of Smoothing Techniques for Language Modeling: We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.) <|cite_end|> and the recurrent neural network LMs <|cite_start|> (Reference: Extensions of recurrent neural network language model: We present several modifications of the original recurrent neural network language model (RNN LM).While this model has been shown to significantly outperform many competitive language modeling techniques in terms of accuracy, the remaining problem is the computational complexity. In this work, we show approaches that lead to more than 15 times speedup for both training and testing phases. Next, we show importance of using a backpropagation through time algorithm. An empirical comparison with feedforward networks is also provided. In the end, we discuss possibilities how to reduce the amount of parameters in the model. The resulting RNN model can thus be smaller, faster both during training and testing, and more accurate than the basic one.) <|cite_end|>, follow the directed graphical modeling approach, where the probability of a sentence is calculated as the product of local conditionals.
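For concreteness, the locally normalized factorization referred to above can be written out explicitly. This is a standard identity rather than anything specific to the cited models; the symbols $x_1, \dots, x_l$ for the words of a sentence of length $l$ are notation introduced here.
\begin{equation}
p(x_1, \dots, x_l) = \prod_{i=1}^{l} p(x_i \mid x_1, \dots, x_{i-1}),
\end{equation}
where an n-gram LM truncates the conditioning history to the previous $n-1$ words and a recurrent neural network LM summarizes it with a hidden state.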
Recently, there are increasing interests in investigating whole-sentence LMs <|cite_start|> (Reference: Trans-dimensional Random Fields for Language Modeling: Language modeling (LM) involves determining the joint probability of words in a sentence. The conditional approach is dominant, representing the joint probability in terms of conditionals. Examples include n-gram LMs and neural network LMs. An alternative approach, called the random field (RF) approach, is used in whole-sentence maximum entropy (WSME) LMs. Although the RF approach has potential benefits, the empirical results of previous WSME models are not satisfactory. In this paper, we revisit the RF approach for language modeling, with a number of innovations. We propose a trans-dimensional RF (TDRF) model and develop a training algorithm using joint stochastic approximation and trans-dimensional mixture sampling. We perform speech recognition experiments on Wall Street Journal data, and find that our TDRF models lead to performances as good as the recurrent neural network LMs but are computationally more efficient in computing sentence probability.) <|cite_end|> <|cite_start|> (Reference: Learning trans-dimensional random fields with applications to language modeling: To describe trans-dimensional observations in sample spaces of different dimensions, we propose a probabilistic model, called the trans-dimensional random field (TRF) by explicitly mixing a collection of random fields. In the framework of stochastic approximation (SA), we develop an effective training algorithm, called augmented SA, which jointly estimates the model parameters and normalizing constants while using trans-dimensional mixture sampling to generate observations of different dimensions. Furthermore, we introduce several statistical and computational techniques to improve the convergence of the training algorithm and reduce computational cost, which together enable us to successfully train TRF models on large datasets. The new model and training algorithm are thoroughly evaluated in a number of experiments. The word morphology experiment provides a benchmark test to study the convergence of the training algorithm and to compare with other algorithms, because log-likelihoods and gradients can be exactly calculated in this experiment. For language modeling, our experiments demonstrate the superiority of the TRF approach in being computationally more efficient in computing data probabilities by avoiding local normalization and being able to flexibly integrate a richer set of features, when compared with n-gram models and neural network models.) <|cite_end|> <|cite_start|> (Reference: Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. 
At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency.) <|cite_end|> <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|> <|cite_start|> (Reference: Whole Sentence Neural Language Models: Recurrent neural networks have become increasingly popular for the task of language modeling achieving impressive gains in state-of-the-art speech recognition and natural language processing (NLP) tasks. Recurrent models exploit word dependencies over a much longer context window (as retained by the history states) than what is feasible with n-gram language models. However the training criterion of choice for recurrent language models continues to be the local conditional likelihood of generating the current word given the (pos-sibly long) word context, thus making local decisions at each word. This locally-conditional design fundamentally limits the ability of the model in exploiting whole sentence structures. In this paper, we present our initial results at whole sentence neural language models which assign a probability to the entire word sequence. We extend the previous work on whole sentence maximum entropy models to recurrent language models while using Noise Contrastive Estimation (NCE) for training, as these sentence models are fundamentally un-normalizable. We present results on a range of tasks: from sequence identification tasks such as, palindrome detection to large vocabulary automatic speech recognition (LVCSR) and demonstrate the modeling power of this approach.) 
<|cite_end|>, which directly model the joint probability of a whole sentence without local normalizations.
Typically, trans-dimensional random field (TRF) LMs <|cite_start|> (Reference: Trans-dimensional Random Fields for Language Modeling: Language modeling (LM) involves determining the joint probability of words in a sentence. The conditional approach is dominant, representing the joint probability in terms of conditionals. Examples include n-gram LMs and neural network LMs. An alternative approach, called the random field (RF) approach, is used in whole-sentence maximum entropy (WSME) LMs. Although the RF approach has potential benefits, the empirical results of previous WSME models are not satisfactory. In this paper, we revisit the RF approach for language modeling, with a number of innovations. We propose a trans-dimensional RF (TDRF) model and develop a training algorithm using joint stochastic approximation and trans-dimensional mixture sampling. We perform speech recognition experiments on Wall Street Journal data, and find that our TDRF models lead to performances as good as the recurrent neural network LMs but are computationally more efficient in computing sentence probability.) <|cite_end|> <|cite_start|> (Reference: Learning trans-dimensional random fields with applications to language modeling: To describe trans-dimensional observations in sample spaces of different dimensions, we propose a probabilistic model, called the trans-dimensional random field (TRF) by explicitly mixing a collection of random fields. In the framework of stochastic approximation (SA), we develop an effective training algorithm, called augmented SA, which jointly estimates the model parameters and normalizing constants while using trans-dimensional mixture sampling to generate observations of different dimensions. Furthermore, we introduce several statistical and computational techniques to improve the convergence of the training algorithm and reduce computational cost, which together enable us to successfully train TRF models on large datasets. The new model and training algorithm are thoroughly evaluated in a number of experiments. The word morphology experiment provides a benchmark test to study the convergence of the training algorithm and to compare with other algorithms, because log-likelihoods and gradients can be exactly calculated in this experiment. For language modeling, our experiments demonstrate the superiority of the TRF approach in being computationally more efficient in computing data probabilities by avoiding local normalization and being able to flexibly integrate a richer set of features, when compared with n-gram models and neural network models.) <|cite_end|> <|cite_start|> (Reference: Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. 
At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency.) <|cite_end|> <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|> are proposed in the undirected graphical modeling approach,
where sentences are modeled as a collection of random fields and the sentence probability is defined in terms of potential functions.
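As a rough sketch of this undirected formulation (following the general description in the TRF papers cited above, with notation chosen here rather than copied from them), a sentence $x^l$ of length $l$ is assigned probability
\begin{equation}
p(l, x^l; \theta) = \frac{\pi_l}{Z_l(\theta)} \exp\big(\phi(x^l; \theta)\big),
\end{equation}
where $\phi(\cdot; \theta)$ is the potential function (linear in discrete features, or a neural network in neural TRFs), $\pi_l$ is a prior over sentence lengths, and $Z_l(\theta)$ is the normalizing constant for length $l$. Because no per-word softmax over the vocabulary is required, computing unnormalized sentence scores is cheap, which is the source of the inference efficiency mentioned below.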
TRF LMs can flexibly support any type of discrete or neural network feature.
Specifically, the neural TRF LMs <|cite_start|> (Reference: Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency.) <|cite_end|> <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|>, whose potential function is defined by a neural network, have been shown to outperform the classical n-gram LMs significantly, and perform close to LSTM LMs but are computational more efficient in computing sentence probabilities.
Training neural TRF LMs is challenging, especially on a large corpus with a large vocabulary.
In <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|>, noise-contrastive estimation (NCE) <|cite_start|> (Reference: Noise-Contrastive Estimation: A new estimation principle for unnormalized statistical models: We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field.) <|cite_end|> is introduced to train neural TRF LMs,
by discriminating between real sentences drawn from the data distribution and noise sentences generated from a noise distribution.
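As a minimal sketch, the NCE objective in its usual form (with notation introduced here: $p_\theta$ denotes the unnormalized model, $q$ the noise distribution, and $\nu$ the ratio of noise to real sentences) is
\begin{equation}
J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}}\left[\log \frac{p_\theta(x)}{p_\theta(x) + \nu\, q(x)}\right] + \nu\, \mathbb{E}_{x \sim q}\left[\log \frac{\nu\, q(x)}{p_\theta(x) + \nu\, q(x)}\right],
\end{equation}
i.e., a logistic-regression-style discrimination between real and noise sentences, in which the normalizing constant can be treated as just another trainable parameter inside $p_\theta$.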
However, the NCE training is found to have the following two problems.
First, reliable NCE needs to generate a large number of noise sentences from the noise distribution, especially when the noise distribution is not close to the data distribution.
However, the time and memory costs of gradient calculation increase almost linearly with the number of noise samples.
In <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|>, the noise distribution is defined by a bigram LM, which is far from the data distribution. For each real sentence, 20 noise sentences are generated from the bigram LM, which is highly undesirable.
Second, the consistency property of NCE holds under the assumption that an arbitrarily large number of real sentences can be drawn from the true but unknown data distribution.
In practice, real sentences are sampled from the empirical distribution (namely the training set), which is rather sparse considering the high-dimensionality of sentences.
The model estimated by NCE is thus easily overfitted to the empirical distribution.
Due to the two problems, the neural TRF LMs in <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|> are defined in the form of exponential tilting of a reference LSTM LM and consequently lose the advantage of efficient inference.
In this paper, we propose dynamic noise-contrastive estimation (DNCE), which consists of two extensions of the original NCE algorithm, addressing the above two problems respectively and thus significantly improving the training of neural TRF LMs.
First, a dynamic noise distribution is introduced and trained simultaneously by minimizing the Kullback-Leibler (KL) divergence between the noise distribution and the data distribution.
With a noise distribution that is close to the data distribution, NCE can achieve reliable model estimation even using a small number of noise sentences.
Second, DNCE discriminates between noise sentences generated from the dynamic noise distribution and sentences generated from the interpolation of the data distribution and the noise distribution.
Intuitively, this increases the size of training set by adding noise sentences (which are asymptotically distributed according to the data distribution) and alleviates the overfitting problem caused by the sparseness of the training set.
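Schematically, denoting the interpolation weight by $\alpha$ (a notation we introduce here only for illustration; the precise formulation is given in \secref{sec:dnce}), one step of DNCE maximizes
\begin{equation}
J_{\mathrm{DNCE}}(\hat\theta) = \mathrm{E}_{(l,x^l)\sim \alpha p_d + (1-\alpha) p_n}\left[\log P(C=0|l,x^l;\hat\theta)\right] + \nu\, \mathrm{E}_{(l,x^l)\sim p_n}\left[\log P(C=1|l,x^l;\hat\theta)\right],
\end{equation}
where $p_d$ and $p_n$ denote the data and noise distributions, $\nu$ is the noise-to-data ratio, and $P(C=0|l,x^l;\hat\theta)$ is the usual NCE posterior probability that a sentence is classified as real rather than noise. The noise distribution $p_n$ itself is re-estimated during training by (approximately) minimizing $\mathrm{KL}(p_d\,\|\,p_n)$, i.e. by maximum-likelihood updates of the noise LM on the training data.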
Three speech recognition experiments are conducted to evaluate the neural TRF LMs with DNCE training.
First, various LMs are trained on Wall Street Journal (WSJ) portion of Penn Treebank (PTB) English dataset and then used to rescore the 1000-best list generated from the WSJ'92 test set, with the same experimental setup as in <|cite_start|> (Reference: Learning trans-dimensional random fields with applications to language modeling: To describe trans-dimensional observations in sample spaces of different dimensions, we propose a probabilistic model, called the trans-dimensional random field (TRF) by explicitly mixing a collection of random fields. In the framework of stochastic approximation (SA), we develop an effective training algorithm, called augmented SA, which jointly estimates the model parameters and normalizing constants while using trans-dimensional mixture sampling to generate observations of different dimensions. Furthermore, we introduce several statistical and computational techniques to improve the convergence of the training algorithm and reduce computational cost, which together enable us to successfully train TRF models on large datasets. The new model and training algorithm are thoroughly evaluated in a number of experiments. The word morphology experiment provides a benchmark test to study the convergence of the training algorithm and to compare with other algorithms, because log-likelihoods and gradients can be exactly calculated in this experiment. For language modeling, our experiments demonstrate the superiority of the TRF approach in being computationally more efficient in computing data probabilities by avoiding local normalization and being able to flexibly integrate a richer set of features, when compared with n-gram models and neural network models.) <|cite_end|> <|cite_start|> (Reference: Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency.) <|cite_end|>.
Then LMs are evaluated in the speech recognition experiment on HKUST Chinese dataset <|cite_start|> (Reference: HKUST/MTS: A Very Large Scale Mandarin Telephone Speech Corpus: ) <|cite_end|>.
The above two experiments demonstrate that neural TRF LMs can be applied independently of the language.
The neural TRF LMs significantly outperform the classical 5-gram LMs, and perform as well as the LSTM LMs while being computationally more efficient (5x to 114x faster), even when the vocabulary size is not large (4K to 10K).
Finally, to evaluate the scalability of neural TRF LMs and DNCE, we conduct the experiment on the Google one-billion benchmark dataset <|cite_start|> (Reference: One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling: We propose a new benchmark corpus to be used for measuring progress in statistical language modeling. With almost one billion words of training data, we hope this benchmark will be useful to quickly evaluate novel language modeling techniques, and to compare their contribution when combined with other advanced techniques. We show performance of several well-known types of language models, with the best results achieved with a recurrent neural network based language model. The baseline unpruned Kneser-Ney 5-gram model achieves perplexity 67.6; a combination of techniques leads to 35% reduction in perplexity, or 10% reduction in cross-entropy (bits), over that baseline. The benchmark is available as a code.google.com project; besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the baseline n-gram models.) <|cite_end|>, which contains about 0.8 billion training words with a vocabulary of about 568 K words.
Compared to a large LSTM LM with adaptive softmax <|cite_start|> (Reference: Efficient softmax approximation for GPUs: We propose an approximate strategy to efficiently train neural network based language models over very large vocabularies. Our approach, called adaptive softmax, circumvents the linear dependency on the vocabulary size by exploiting the unbalanced word distribution to form clusters that explicitly minimize the expectation of computation time. Our approach further reduces the computational time by exploiting the specificities of modern architectures and matrix-matrix vector operations, making it particularly suited for graphical processing units. Our experiments carried out on standard benchmarks, such as EuroParl and One Billion Word, show that our approach brings a large gain in efficiency over standard approximations while achieving an accuracy close to that of the full softmax. The code of our method is available at https://github.com/facebookresearch/adaptive-softmax.) <|cite_end|>, the neural TRF LM achieves a slightly lower WER and is also 54x faster in rescoring the n-best list.
Moreover, combining the neural TRF LMs with LSTM LMs and n-gram LMs can further reduce the error rates.
The source code for all the experiments is available at \url{https://github.com/wbengine/TRF-NN-Tensorflow}.
The rest of the paper is organized as follows.
We discuss related work in \secref{sec:relate} and present basics about neural TRF LMs and NCE in \secref{sec:background}.
The proposed DNCE method is described in \secref{sec:dnce}.
After presenting the three experiments in \secref{sec:exps}, we draw conclusions in \secref{sec:conclusion}.
Background
\label{sec:background}
\subsection{Neural trans-dimensional random field LMs}
\label{sec:trf}
As in <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|>, the joint probability of a sequence $x^l$ and its length $l$ is assumed to be distributed from an exponential family model:
\begin{equation}\label{eq:trf}
p(l, x^l;\theta) = \frac{\pi_l}{Z_l(\theta)} e^{\phi(x^l;\theta)}
\end{equation}
where $x^l=(x_1, \ldots, x_l)$ is a word sequence of length $l$ ($l=1, \ldots, m$),
$\pi_l$ is the prior length probability,
$\theta$ indicates the set of parameters,
$Z_l(\theta)$ is the normalization constant of length $l$, i.e. $Z_l(\theta) = \sum_{x^l} e^{\phi(x^l; \theta)}$.
$\phi$ is the potential function, which can be defined by neural networks.
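As a toy numerical illustration of \eqref{eq:trf} (all numbers below are made up rather than taken from the paper), the joint probability is just the exponentiated potential rescaled by the length prior and the per-length normalization constant:
\begin{verbatim}
import math

# Toy evaluation of Eq. (1): p(l, x^l) = pi_l / Z_l * exp(phi(x^l))
pi = {1: 0.1, 2: 0.3, 3: 0.6}        # illustrative length prior pi_l
log_Z = {1: 2.0, 2: 5.5, 3: 9.0}     # illustrative log normalization constants log Z_l
phi = 8.2                            # potential phi(x^l; theta) of a sentence of length l = 3

log_p = math.log(pi[3]) + phi - log_Z[3]
print(math.exp(log_p))               # joint probability p(l = 3, x^l)
\end{verbatim}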
In this paper, different from the model definitions in <|cite_start|> (Reference: Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency.) <|cite_end|> <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|>, we simplify the neural network architecture and define the potential function by a bidirectional LSTM as shown in \figref{fig:rnn-trf}.
Compared with the bidirectional LSTM LMs in <|cite_start|> (Reference: Investigating Bidirectional Recurrent Neural Network Language Models for Speech Recognition: Copyright © 2017 ISCA. Recurrent neural network language models (RNNLMs) are powerful language modeling techniques. Significant performance improvements have been reported in a range of tasks including speech recognition compared to n-gram language models. Conventional n-gram and neural network language models are trained to predict the probability of the next word given its preceding context history. In contrast, bidirectional recurrent neural network based language models consider the context from future words as well. This complicates the inference process, but has theoretical benefits for tasks such as speech recognition as additional context information can be used. However to date, very limited or no gains in speech recognition performance have been reported with this form of model. This paper examines the issues of training bidirectional recurrent neural network language models (bi-RNNLMs) for speech recognition. A bi-RNNLM probability smoothing technique is proposed, that addresses the very sharp posteriors that are often observed in these models. The performance of the bi-RNNLMs is evaluated on three speech recognition tasks: broadcast news; meeting transcription (AMI); and low-resource systems (Babel data). On all tasks gains are observed by applying the smoothing technique to the bi-RNNLM. In addition consistent performance gains can be obtained by combining bi-RNNLMs with n-gram and uni-directional RNNLMs.) <|cite_end|> <|cite_start|> (Reference: On Training Bi-directional Neural Network Language Model with Noise Contrastive Estimation: We propose to train bi-directional neural network language model(NNLM) with noise contrastive estimation(NCE). Experiments are conducted on a rescore task on the PTB data set. It is shown that NCE-trained bi-directional NNLM outperformed the one trained by conventional maximum likelihood training. But still(regretfully), it did not out-perform the baseline uni-directional NNLM.) <|cite_end|>, this neural TRF LM provides a theoretically sound framework for incorporating bidirectional LSTM features.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{rnn-trf}
\caption{The bidirectional LSTM used to define the potential function $\phi(x^l;\theta)$ in neural TRFs.}\label{fig:rnn-trf}
\vskip -0.1in
\end{figure}
The bidirectional LSTM based potential function is detailed as follows.
First, each word $x_i$ ($i=1,\ldots,l$) in a sentence is mapped to an embedding vector $e_i \in R^d$.
Then the word embedding vectors are fed into a bidirectional LSTM to extract the long-range sequential features from the forward and backward contexts.
Denote by $h_{f,i}, h_{b,i} \in R^d$ the hidden vectors of the forward and backward LSTMs respectively at position $i$.
Finally, we calculate the inner product of the hidden vector of the forward LSTM at the current position and the embedding vector at the next position,
and the inner product of the hidden vector of the backward LSTM at the current position and the embedding vector at the previous position (dashed lines in \figref{fig:rnn-trf}).
The potential function $\phi(x^l;\theta)$ is obtained by summing all these inner products:
\begin{equation}\label{eq:phi}
\phi(x^l;\theta) = \sum_{i=1}^{l-1} h_{f,i}^T e_{i+1} + \sum_{i=2}^{l} h_{b,i}^T e_{i-1}
\end{equation}
where $\theta$ denotes all the parameters in the neural network.
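For concreteness, a minimal PyTorch sketch of this potential is given below; the vocabulary size and dimension $d$ are placeholders, and the exact configuration used in the experiments may differ.
\begin{verbatim}
import torch
import torch.nn as nn

class TRFPotential(nn.Module):
    """Sketch of the bidirectional-LSTM potential phi(x^l; theta) in Eq. (2).
    Hyper-parameters (vocab size, dimension d) are illustrative assumptions."""
    def __init__(self, vocab_size=10000, d=200):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, d)
        self.fwd = nn.LSTM(d, d, batch_first=True)   # forward LSTM
        self.bwd = nn.LSTM(d, d, batch_first=True)   # backward LSTM (fed the reversed input)

    def forward(self, x):                 # x: (batch, l) word indices
        e = self.emb(x)                   # (batch, l, d) word embeddings e_i
        h_f, _ = self.fwd(e)              # forward hidden states h_{f,i}
        h_b_rev, _ = self.bwd(torch.flip(e, dims=[1]))
        h_b = torch.flip(h_b_rev, dims=[1])           # backward hidden states h_{b,i}
        # sum_{i=1}^{l-1} h_{f,i}^T e_{i+1}
        fwd_term = (h_f[:, :-1, :] * e[:, 1:, :]).sum(dim=(1, 2))
        # sum_{i=2}^{l} h_{b,i}^T e_{i-1}
        bwd_term = (h_b[:, 1:, :] * e[:, :-1, :]).sum(dim=(1, 2))
        return fwd_term + bwd_term        # phi(x^l; theta), shape (batch,)
\end{verbatim}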
\subsection{Noise-contrastive estimation (NCE)}
Noise-contrastive estimation (NCE) is proposed in <|cite_start|> (Reference: Noise-Contrastive Estimation: A new estimation principle for unnormalized statistical models: We present a new estimation principle for parameterized statistical models. The idea is to perform nonlinear logistic regression to discriminate between the observed data and some artificially generated noise, using the model log-density function in the regression nonlinearity. We show that this leads to a consistent (convergent) estimator of the parameters, and analyze the asymptotic variance. In particular, the method is shown to directly work for unnormalized models, i.e. models where the density function does not integrate to one. The normalization constant can be estimated just like any other parameter. For a tractable ICA model, we compare the method with other estimation methods that can be used to learn unnormalized models, including score matching, contrastive divergence, and maximum-likelihood where the normalization constant is estimated with importance sampling. Simulations show that noise-contrastive estimation offers the best trade-off between computational and statistical efficiency. The method is then applied to the modeling of natural images: We show that the method can successfully estimate a large-scale two-layer model and a Markov random field.) <|cite_end|> for learning unnormalized statistical models.
Its basic idea is ``learning by comparison'', i.e. to perform nonlinear logistic regression to discriminate between data samples drawn from the data distribution and noise samples drawn from a known noise distribution.
An advantage of NCE is that the normalization constants can be treated as ordinary parameters and updated together with the model parameters.
To apply NCE to estimate neural TRF LMs defined in \eqref{eq:trf}, we treat the logarithmic normalization constants $\log Z_l$, $l=1, \ldots, m$ as parameters and rewrite \eqref{eq:trf} in the following form:
\begin{equation}\label{eq:trf-nce}
p(l,x^l;\hat \theta) = \pi_l e^{\hat \phi(l,x^l; \hat \theta)}.
\end{equation}
Here $\hat \phi(l,x^l;\hat{\theta}) = \phi(x^l;\theta) - \log Z_l$,
and $\hat \theta = (\theta, \log Z_1,$ $\ldots, \log Z_m)$ consists of the parameters of the potential function and the normalization constants, which can be estimated together in NCE.
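In an implementation, this simply amounts to keeping one trainable scalar per length. A sketch along the lines of the potential module above is given below; the class interface, the maximum length $m$, and the way the length prior is supplied are illustrative assumptions rather than the paper's exact code.
\begin{verbatim}
import torch
import torch.nn as nn

class TRFModel(nn.Module):
    """Unnormalized TRF of Eq. (3) with the log-normalizers log Z_l as parameters."""
    def __init__(self, potential, pi, m=100):
        super().__init__()
        self.potential = potential                     # e.g. the TRFPotential sketch above
        self.log_pi = torch.log(torch.tensor(pi))      # fixed length prior pi_l, length m + 1
        self.log_Z = nn.Parameter(torch.zeros(m + 1))  # log Z_l, estimated jointly by NCE

    def log_prob(self, x):       # x: (batch, l) sentences of a common length l
        l = x.size(1)
        return self.log_pi[l] + self.potential(x) - self.log_Z[l]
\end{verbatim}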
There are three distributions involved in NCE -- the true but unknown data distribution denoted by $p_d(l,x^l)$, the model distribution $p(l,x^l;\hat\theta)$ in \eqref{eq:trf-nce} and a fixed noise distribution denoted by $p_n(l,x^l)$, which is defined as a bigram LM in <|cite_start|> (Reference: Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline.) <|cite_end|>.
Consider the binary classification of a sentence $(l,x^l)$ coming from two classes - from the data distribution ($C=0$) and from the noise distribution ($C=1$), where $C$ is the class label.
Assume that the ratio between the prior probabilities is $1:\nu$, and the class-conditional probability for $C=0$ is modeled by $p(l,x^l;\hat\theta)$.
Then the posterior probabilities can be calculated respectively as follows:
\begin{align}
P(C=0|l, x^l; \hat \theta) &= \frac{p(l, x^l; \hat \theta)}{p(l, x^l; \hat \theta) + \nu p_n(l, x^l)} \label{eq:pc0} \\
P(C=1|l, x^l; \hat \theta) &= 1 - P(C=0|l, x^l; \hat \theta) \label{eq:pc1}
\end{align}
NCE estimates the model distribution by maximizing the following conditional log-likelihood:
\begin{equation} \label{eq:j}
\begin{split}
J(\hat \theta) = \sum_{l=1}^{m} \sum_{x^l} p_d(l,x^l) \log P(C=0|l,x^l;\hat\theta) + \\
\nu \sum_{l=1}^{m} \sum_{x^l} p_n(l,x^l)\log P(C=1|l,x^l;\hat\theta)
\end{split}
\end{equation}
$J(\hat \theta)$ is the summation of two expectations.
The first is the expectation with respect to (w.r.t.) the data distribution $p_d(l,x^l)$, which can be approximated by randomly selecting sentences from the training set.
The second is the expectation w.r.t. the noise distribution $p_n(l,x^l)$, which can be computed by drawing sentences from the noise distribution itself.
Denote by $D$ and $B$ the data sample set and the noise sample set at the current iteration,
and by $|D|$ and $|B|$ the number of sentences in $D$ and $B$, respectively, satisfying $\nu = |B|/|D|$.
The gradient with respect to $\hat\theta$ can be computed as follows:
\begin{equation} \label{eq:grad}
\begin{split}
\frac{\partial J(\hat\theta)}{\partial \hat\theta} =
\frac{1}{|D|} \sum_{(l,x^l) \in D}P(C=1|l,x^l;\hat\theta) \frac{\partial \hat\phi(l, x^l; \hat\theta)}{\partial \hat\theta} \\
- \frac{\nu}{|B|} \sum_{(l,x^l) \in B} P(C=0|l,x^l;\hat\theta) \frac{\partial \hat\phi(l, x^l; \hat\theta)}{\partial \hat\theta}
\end{split}
\end{equation}
The gradient of the potential function $\hat\phi(l, x^l;\hat\theta)$ w.r.t. the parameters $\theta$ can be efficiently computed through the back-propagation algorithm.
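In an automatic-differentiation framework, the same gradient is obtained by minimizing the negative of \eqref{eq:j} directly. A self-contained sketch of the batch objective follows; the function name and interface are ours, not taken from the original implementation.
\begin{verbatim}
import math
import torch
import torch.nn.functional as F

def nce_loss(logp_model_data, logp_noise_data,
             logp_model_noise, logp_noise_noise, nu):
    """Negative of J(theta_hat) in Eq. (7), using P(C=0|.) = sigmoid(s) with
    s = log p(l, x^l; theta_hat) - log(nu * p_n(l, x^l)).
    Each argument is a tensor of per-sentence log-probabilities under the model
    or the noise LM, for the data batch D and the noise batch B; nu = |B|/|D|."""
    s_data  = logp_model_data  - logp_noise_data  - math.log(nu)
    s_noise = logp_model_noise - logp_noise_noise - math.log(nu)
    # log P(C=0|.) = logsigmoid(s) and log P(C=1|.) = logsigmoid(-s)
    j = F.logsigmoid(s_data).mean() + nu * F.logsigmoid(-s_noise).mean()
    return -j                    # minimize the negative objective
\end{verbatim}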
Then any gradient method can be used to optimize the parameters and normalization constants, such as stochastic gradient descent (SGD) or Adam <|cite_start|> (Reference: Adam: A Method for Stochastic Optimization: We introduce Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments. The method is straightforward to implement, is computationally efficient, has little memory requirements, is invariant to diagonal rescaling of the gradients, and is well suited for problems that are large in terms of data and/or parameters. The method is also appropriate for non-stationary objectives and problems with very noisy and/or sparse gradients. The hyper-parameters have intuitive interpretations and typically require little tuning. Some connections to related algorithms, on which Adam was inspired, are discussed. We also analyze the theoretical convergence properties of the algorithm and provide a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework. Empirical results demonstrate that Adam works well in practice and compares favorably to other stochastic optimization methods. Finally, we discuss AdaMax, a variant of Adam based on the infinity norm.) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> Trans-dimensional Random Fields for Language Modeling: Language modeling (LM) involves determining the joint probability of words in a sentence. The conditional approach is dominant, representing the joint probability in terms of conditionals. Examples include n-gram LMs and neural network LMs. An alternative approach, called the random field (RF) approach, is used in whole-sentence maximum entropy (WSME) LMs. Although the RF approach has potential benefits, the empirical results of previous WSME models are not satisfactory. In this paper, we revisit the RF approach for language modeling, with a number of innovations. We propose a trans-dimensional RF (TDRF) model and develop a training algorithm using joint stochastic approximation and trans-dimensional mixture sampling. We perform speech recognition experiments on Wall Street Journal data, and find that our TDRF models lead to performances as good as the recurrent neural network LMs but are computationally more efficient in computing sentence probability. <|reference_end|>",
"<|reference_start|> Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency. <|reference_end|>",
"<|reference_start|> Learning neural trans-dimensional random field language models with noise-contrastive estimation: Trans-dimensional random field language models (TRF LMs) where sentences are modeled as a collection of random fields, have shown close performance with LSTM LMs in speech recognition and are computationally more efficient in inference. However, the training efficiency of neural TRF LMs is not satisfactory, which limits the scalability of TRF LMs on large training corpus. In this paper, several techniques on both model formulation and parameter estimation are proposed to improve the training efficiency and the performance of neural TRF LMs. First, TRFs are reformulated in the form of exponential tilting of a reference distribution. Second, noise-contrastive estimation (NCE) is introduced to jointly estimate the model parameters and normalization constants. Third, we extend the neural TRF LMs by marrying the deep convolutional neural network (CNN) and the bidirectional LSTM into the potential function to extract the deep hierarchical features and bidirectionally sequential features. Utilizing all the above techniques enables the successful and efficient training of neural TRF LMs on a 40x larger training set with only 1/3 training time and further reduces the WER with relative reduction of 4.7% on top of a strong LSTM LM baseline. <|reference_end|>",
"<|reference_start|> Language modeling with Neural trans-dimensional random fields: Trans-dimensional random field language models (TRF LMs) have recently been introduced, where sentences are modeled as a collection of random fields. The TRF approach has been shown to have the advantages of being computationally more efficient in inference than LSTM LMs with close performance and being able to flexibly integrating rich features. In this paper we propose neural TRFs, beyond of the previous discrete TRFs that only use linear potentials with discrete features. The idea is to use nonlinear potentials with continuous features, implemented by neural networks (NNs), in the TRF framework. Neural TRFs combine the advantages of both NNs and TRFs. The benefits of word embedding, nonlinear feature learning and larger context modeling are inherited from the use of NNs. At the same time, the strength of efficient inference by avoiding expensive softmax is preserved. A number of technical contributions, including employing deep convolutional neural networks (CNNs) to define the potentials and incorporating the joint stochastic approximation (JSA) strategy in the training algorithm, are developed in this work, which enable us to successfully train neural TRF LMs. Various LMs are evaluated in terms of speech recognition WERs by rescoring the 1000-best lists of WSJ'92 test data. The results show that neural TRF LMs not only improve over discrete TRF LMs, but also perform slightly better than LSTM LMs with only one fifth of parameters and 16x faster inference efficiency. <|reference_end|>"
] | [
2,
4,
13,
23
] | {"<|cite_1|>": "arxiv-668829", "<|cite_2|>": "ss-1536373", "<|multi_cite_3_1|>": "ss-1063185", "<|multi_cite_3_2|>": "ss-1937859", "<|multi_cite_3_3|>": "arxiv-130064", "<|multi_cite_3_4|>": "arxiv-138535", "<|multi_cite_3_5|>": "ss-1376203", "<|multi_cite_4_1|>": "ss-1063185", "<|multi_cite_4_2|>": "ss-1937859", "<|multi_cite_4_3|>": "arxiv-130064", "<|multi_cite_4_4|>": "arxiv-138535", "<|multi_cite_5_1|>": "arxiv-130064", "<|multi_cite_5_2|>": "arxiv-138535", "<|cite_6|>": "arxiv-138535", "<|cite_7|>": "ss-726001", "<|cite_8|>": "arxiv-138535", "<|cite_9|>": "arxiv-138535", "<|multi_cite_10_1|>": "ss-1937859", "<|multi_cite_10_2|>": "arxiv-130064", "<|cite_11|>": "ss-1376204", "<|cite_12|>": "arxiv-53855", "<|cite_13|>": "arxiv-105790", "<|cite_14|>": "arxiv-138535", "<|multi_cite_15_1|>": "arxiv-130064", "<|multi_cite_15_2|>": "arxiv-138535", "<|multi_cite_16_1|>": "ss-1013766", "<|multi_cite_16_2|>": "arxiv-92515", "<|cite_17|>": "ss-726001", "<|cite_18|>": "arxiv-138535", "<|cite_19|>": "arxiv-70669"} |
1102.4137 | <|paper_start|> Title: Using Distributed Rotations for a Low-Complexity Dynamic Decode-and-Forward Relay Protocol
Abstract: Using Distributed Rotations for a Low-Complexity Dynamic Decode-and-Forward Relay Protocol: In this paper, we propose to implement the dynamic decode-and-forward (DDF) protocol with distributed rotations. In addition to being the first minimum-delay implementation of the DDF protocol proposed for any number of relays, this technique allows to exploit cooperative diversity without inducing the high decoding complexity of a space-time code. The analysis of outage probabilities for different number of relays and rotations shows that the performance of this technique is close to optimal. Moreover, a lower-bound on the diversity-multiplexing gain tradeoff (DMT) is provided in the case of a single relay and two rotations. This lower-bound reaches the optimal DDF's DMT when the frame-length grows to infinity, which shows that even a small number of rotations is enough to obtain good performance.
Introduction
The last decade has witnessed a growing interest in wireless cooperative communications-\nocite{nosratinia04} <|cite_start|> (Reference: Cooperative strategies and capacity theorems for relay networks: Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels.) <|cite_end|>. Indeed wireless nodes cannot always be equipped with several antennas due to size, cost or hardware limitations. But some diversity can still be exploited by considering the virtual multiple antenna array formed by several nodes of the network and using distributed transmission techniques.
Cooperative protocols have been classified into different families according to the processing performed at the relays. One of these families is formed by the decode-and-forward (DF) protocols. In theory, these protocols could bring significant performance improvements thanks to noise removal at the relays. However, they are limited by the source-relay capacities, since the relays have to decode the signals correctly before being able to forward them.
The dynamic decode-and-forward (DDF), proposed in <|cite_start|> (Reference: On the Achievable Diversity-Multiplexing Tradeoff in Half-Duplex Cooperative Channels: We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding/encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0lesrles1/N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint.) <|cite_end|>, is based on the idea that each relay should listen till it receives enough information to decode, and retransmit only then. It has been proven to outperform any amplify-and-forward (AF) protocol in terms of diversity-multiplexing gain tradeoff (DMT). However, this protocol is quite complex and providing a simple implementation is still an open problem.
Recently proposed in <|cite_start|> (Reference: Distributed rotation recovers spatial diversity: In relay networks, a conventional way to exploit spatial diversity is to introduce distributed space-time processing at the relays. In our work, we show that even simple time-varying distributed rotation can recover spatial diversity. The main idea is to convert the inherent spatial diversity to time diversity by creating an artificial fast fading channel. It turns out that the proposed framework is both tractable from the theoretical point of view and simple from the practical point of view. Furthermore, the framework is quite general and can be applied to a wide range of linear/nonlinear relaying strategies. As applications, we first propose a linear relaying scheme called rotate-and-forward for multiple-antenna two-hop layered networks. It is shown that, in some non-trivial setting, this scheme outperforms existing schemes and achieves the optimal diversity-multiplexing tradeoff. The second application is a decode-and-forward scheme based on the same idea in the single-antenna multiple-relay channel. It is shown to achieve the maximum diversity with low signaling complexity.) <|cite_end|>, distributed rotations is a new technique to exploit spatial diversity without adding the decoding complexity of a space-time code. In <|cite_start|> (Reference: Distributed rotation recovers spatial diversity: In relay networks, a conventional way to exploit spatial diversity is to introduce distributed space-time processing at the relays. In our work, we show that even simple time-varying distributed rotation can recover spatial diversity. The main idea is to convert the inherent spatial diversity to time diversity by creating an artificial fast fading channel. It turns out that the proposed framework is both tractable from the theoretical point of view and simple from the practical point of view. Furthermore, the framework is quite general and can be applied to a wide range of linear/nonlinear relaying strategies. As applications, we first propose a linear relaying scheme called rotate-and-forward for multiple-antenna two-hop layered networks. It is shown that, in some non-trivial setting, this scheme outperforms existing schemes and achieves the optimal diversity-multiplexing tradeoff. The second application is a decode-and-forward scheme based on the same idea in the single-antenna multiple-relay channel. It is shown to achieve the maximum diversity with low signaling complexity.) <|cite_end|>, the authors show that this technique is optimal in terms of DMT for the two-hop multiple-relay channel using an amplify-and-forward strategy.
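To give a rough feel for the idea (a toy sketch with purely illustrative values, not the protocol analyzed in this paper), each relay multiplies the symbol it forwards by a time-varying unit-modulus rotation, so that the quasi-static relay-destination channels combine into an artificially fast-fading effective channel at the destination:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
K, T = 2, 8                                   # relays, time slots (illustrative)
h = (rng.normal(size=K) + 1j * rng.normal(size=K)) / np.sqrt(2)   # quasi-static fades
theta = 2 * np.pi * rng.random((K, T))        # per-relay, per-slot rotation angles
x = np.sign(rng.normal(size=T))               # BPSK symbols forwarded by the relays
g = (h[:, None] * np.exp(1j * theta)).sum(axis=0)   # effective channel, one value per slot
y = g * x + 0.1 * (rng.normal(size=T) + 1j * rng.normal(size=T))  # destination signal
\end{verbatim}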
In this paper, we propose to implement a low-complexity DDF protocol with distributed rotations and analyze the performance of this scheme in terms of outage probability. We also take a special interest in the case of a single relay and analyze its performance for a small number of rotations in terms of DMT. Finally we discuss the implementation of this protocol and study the impact of transmitting data blocks instead of single symbols. <|paper_end|> | [
"<|reference_start|> Cooperative strategies and capacity theorems for relay networks: Coding strategies that exploit node cooperation are developed for relay networks. Two basic schemes are studied: the relays decode-and-forward the source message to the destination, or they compress-and-forward their channel outputs to the destination. The decode-and-forward scheme is a variant of multihopping, but in addition to having the relays successively decode the message, the transmitters cooperate and each receiver uses several or all of its past channel output blocks to decode. For the compress-and-forward scheme, the relays take advantage of the statistical dependence between their channel outputs and the destination's channel output. The strategies are applied to wireless channels, and it is shown that decode-and-forward achieves the ergodic capacity with phase fading if phase information is available only locally, and if the relays are near the source node. The ergodic capacity coincides with the rate of a distributed antenna array with full cooperation even though the transmitting antennas are not colocated. The capacity results generalize broadly, including to multiantenna transmission with Rayleigh fading, single-bounce fading, certain quasi-static fading problems, cases where partial channel knowledge is available at the transmitters, and cases where local user cooperation is permitted. The results further extend to multisource and multidestination networks such as multiaccess and broadcast relay channels. <|reference_end|>",
"<|reference_start|> On the Achievable Diversity-Multiplexing Tradeoff in Half-Duplex Cooperative Channels: We propose novel cooperative transmission protocols for delay-limited coherent fading channels consisting of N (half-duplex and single-antenna) partners and one cell site. In our work, we differentiate between the relay, cooperative broadcast (down-link), and cooperative multiple-access (CMA) (up-link) channels. The proposed protocols are evaluated using Zheng-Tse diversity-multiplexing tradeoff. For the relay channel, we investigate two classes of cooperation schemes; namely, amplify and forward (AF) protocols and decode and forward (DF) protocols. For the first class, we establish an upper bound on the achievable diversity-multiplexing tradeoff with a single relay. We then construct a new AF protocol that achieves this upper bound. The proposed algorithm is then extended to the general case with (N-1) relays where it is shown to outperform the space-time coded protocol of Laneman and Wornell without requiring decoding/encoding at the relays. For the class of DF protocols, we develop a dynamic decode and forward (DDF) protocol that achieves the optimal tradeoff for multiplexing gains 0lesrles1/N. Furthermore, with a single relay, the DDF protocol is shown to dominate the class of AF protocols for all multiplexing gains. The superiority of the DDF protocol is shown to be more significant in the cooperative broadcast channel. The situation is reversed in the CMA channel where we propose a new AF protocol that achieves the optimal tradeoff for all multiplexing gains. A distinguishing feature of the proposed protocols in the three scenarios is that they do not rely on orthogonal subspaces, allowing for a more efficient use of resources. In fact, using our results one can argue that the suboptimality of previously proposed protocols stems from their use of orthogonal subspaces rather than the half-duplex constraint. <|reference_end|>",
"<|reference_start|> Distributed rotation recovers spatial diversity: In relay networks, a conventional way to exploit spatial diversity is to introduce distributed space-time processing at the relays. In our work, we show that even simple time-varying distributed rotation can recover spatial diversity. The main idea is to convert the inherent spatial diversity to time diversity by creating an artificial fast fading channel. It turns out that the proposed framework is both tractable from the theoretical point of view and simple from the practical point of view. Furthermore, the framework is quite general and can be applied to a wide range of linear/nonlinear relaying strategies. As applications, we first propose a linear relaying scheme called rotate-and-forward for multiple-antenna two-hop layered networks. It is shown that, in some non-trivial setting, this scheme outperforms existing schemes and achieves the optimal diversity-multiplexing tradeoff. The second application is a decode-and-forward scheme based on the same idea in the single-antenna multiple-relay channel. It is shown to achieve the maximum diversity with low signaling complexity. <|reference_end|>",
"<|reference_start|> Distributed rotation recovers spatial diversity: In relay networks, a conventional way to exploit spatial diversity is to introduce distributed space-time processing at the relays. In our work, we show that even simple time-varying distributed rotation can recover spatial diversity. The main idea is to convert the inherent spatial diversity to time diversity by creating an artificial fast fading channel. It turns out that the proposed framework is both tractable from the theoretical point of view and simple from the practical point of view. Furthermore, the framework is quite general and can be applied to a wide range of linear/nonlinear relaying strategies. As applications, we first propose a linear relaying scheme called rotate-and-forward for multiple-antenna two-hop layered networks. It is shown that, in some non-trivial setting, this scheme outperforms existing schemes and achieves the optimal diversity-multiplexing tradeoff. The second application is a decode-and-forward scheme based on the same idea in the single-antenna multiple-relay channel. It is shown to achieve the maximum diversity with low signaling complexity. <|reference_end|>"
] | [
0,
1,
2,
3
] | {"<|cite_2|>": "ss-814646", "<|cite_3|>": "ss-1026699", "<|cite_4|>": "ss-2535868", "<|cite_5|>": "ss-2535868"} |
2408.05315 | <|paper_start|> Title: Omobot: a low-cost mobile robot for autonomous search and fall detection
Abstract: Omobot: a low-cost mobile robot for autonomous search and fall detection: Detecting falls among the elderly and alerting their community responders can save countless lives. We design and develop a low-cost mobile robot that periodically searches the house for the person being monitored and sends an email to a set of designated responders if a fall is detected. In this project, we make three novel design decisions and contributions. First, our custom-designed low-cost robot has advanced features like omnidirectional wheels, the ability to run deep learning models, and autonomous wireless charging. Second, we improve the accuracy of fall detection for the YOLOv8-Pose-nano object detection network by 6% and YOLOv8-Pose-large by 12%. We do so by transforming the images captured from the robot viewpoint (camera height 0.15m from the ground) to a typical human viewpoint (1.5m above the ground) using a principally computed Homography matrix. This improves network accuracy because the training dataset MS-COCO on which YOLOv8-Pose is trained is captured from a human-height viewpoint. Lastly, we improve the robot controller by learning a model that predicts the robot velocity from the input signal to the motor controller.
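As an illustrative sketch of the viewpoint transformation mentioned in the abstract (the homography values, file names, and output size below are placeholders rather than the principally computed matrix from the paper), warping a robot-view frame with OpenCV could look like:
\begin{verbatim}
import cv2
import numpy as np

# Placeholder 3x3 homography; in the paper it is computed principally from the
# camera geometry (0.15 m robot viewpoint to roughly 1.5 m human viewpoint).
H = np.array([[1.0, 0.02, 0.0],
              [0.0, 1.10, -40.0],
              [0.0, 0.0005, 1.0]], dtype=np.float64)

frame = cv2.imread("robot_view.jpg")            # frame captured by the robot camera
h, w = frame.shape[:2]
warped = cv2.warpPerspective(frame, H, (w, h))  # approximate human-height viewpoint
cv2.imwrite("human_view.jpg", warped)           # image then passed to YOLOv8-Pose
\end{verbatim}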
Introduction
Fall prevention and management among the elderly has been a well-recognized problem since the 1990s <|cite_start|> (Reference: Prevention of Falls and Fall Injuries in Elderly Persons: A Research Agenda: In summary, falls and fall injuries represent common and potentially preventable causes of functional disability, morbidity, and increased health care utilization among elderly persons. While much has been learned about falls and fall injuries over the past decade, much further information is needed in order to identify the optimal prevention strategies. Some of the important questions that remain to be answered are listed in Table 1. While certainly not exhaustive, the topics listed in Table 1 and discussed in this paper provide a beginning template for a research agenda for fall and injury prevention among elderly persons.) <|cite_end|> <|cite_start|> (Reference: Three generations of telecare of the elderly: The increasing number of elderly and infirm people living alone in their own homes is creating the need for new personal emergency response systems based on public telephone and cable networks. While existing systems enable clients to summon help in the event of illness, future services are likely to make use of evolving technologies to provide automatic sensing of emergencies and to predict long-term deterioration in health, using activity profiles. The characteristics and requirements of these second-generation systems are discussed and predictions are made about the innovative services and facilities that may be available in third-generation systems, when broadband communication is available to the home.) <|cite_end|>.
As the proportion of 65 and above population in the OECD countries has grown from 11.36\% in 1990 to 17.96\% in 2022 <|cite_start|> (Reference: Need for public health policies in the elderly population: indicators of aging in a Social Security Institute in Mexico.: Introduction
An integral diagnosis of population contemplates within its components the population demographic analysis that is indispensable in the formulation of public policies. Population policy has a clearly transversal nature, since all actions in the economic, social, political, cultural, geographical, and obviously, demographic fields, have direct or indirect repercussions on it.
Objectives
To determine the population dynamics and the global growth of the older adult population (OAP) of 60 years and more.
Materials and methods
Cross-sectional, retrospective study. The information was obtained from the statistical yearbooks of the institute of security and social services of state workers, Mexico (1999-2015). Several demographic ageing indicators were analyzed.
Results
There was a constant increase in percentage points in the proportion of OAP, index of ageing, demographic dependency ratio of old age, global index of dependence, index of dependence of old people, and index of the active population structure (6, 19.2, 15.5, 8.5, 8.2 and 31.2%, respectively). The indicator global index of dependence and masculinity showed a decrease (0.6 and 3.1%, respectively).
Conclusions
Our data provide evidence that suggests modifying and generating public policies according to OAP.) <|cite_end|>, the importance of fall prevention and management has grown accordingly.
While preventing falls is the first line of defense, the second line of defense is managing falls by reducing the response and rescue time <|cite_start|> (Reference: Falls, Fall Prevention, and Fall Detection Technologies: ) <|cite_end|>.
Fall detection systems are essential for the elderly, as they play a critical role in detecting falls promptly and ensuring timely medical intervention. By providing continuous monitoring and immediate assistance, such systems enhance the safety and independence of elderly individuals while reducing their reliance on caregivers and healthcare resources.
Fall detection technologies that alert caregivers <|cite_start|> (Reference: Three generations of telecare of the elderly: The increasing number of elderly and infirm people living alone in their own homes is creating the need for new personal emergency response systems based on public telephone and cable networks. While existing systems enable clients to summon help in the event of illness, future services are likely to make use of evolving technologies to provide automatic sensing of emergencies and to predict long-term deterioration in health, using activity profiles. The characteristics and requirements of these second-generation systems are discussed and predictions are made about the innovative services and facilities that may be available in third-generation systems, when broadband communication is available to the home.) <|cite_end|> include
user-activated alarm systems <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|>, passive fall detection systems <|cite_start|> (Reference: Falls, Fall Prevention, and Fall Detection Technologies: ) <|cite_end|> <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. 
As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|> <|cite_start|> (Reference: A survey on vision-based fall detection: Falls are a major cause of fatal injury for the elderly population. To improve the quality of living for seniors, a wide range of monitoring systems with fall detection functionality have been proposed over recent years. This article is a survey of systems and algorithms which aim at automatically detecting cases where a human falls and may have been injured. Existing fall detection methods can be categorized as using sensors, or being exclusively vision-based. This literature review focuses on vision-based methods.) 
<|cite_end|> <|cite_start|> (Reference: Elderly fall detection systems: A literature survey: Falling is among the most damaging event elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although there are various existing studies which focus on the fall detection with individual sensors, such as wearable ones and depth cameras, the performance of these systems are still not satisfying as they suffer mostly from high false alarms. Literature shows that fusing the signals of different sensors could result in higher accuracy and lower false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the benchmark data sets available that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved up to date and to identify areas where further effort would be beneficial.) <|cite_end|> and
mobile robots as fall
detectors <|cite_start|> (Reference: 2020 17th International Conference on Ubiquitous Robots (UR): ) <|cite_end|> <|cite_start|> (Reference: 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020: ) <|cite_end|> <|cite_start|> (Reference: 2021 IEEE 4th International Conference on Electronics Technology (ICET): ) <|cite_end|>.
These technologies have complementary strengths and weaknesses, and a hybrid system can be customized based on the user's needs and preferences. We compare these technologies in the related work section (Section~\ref{sec:related-work}).
In this work, we focus on using mobile robots as fall detectors. While many mobile robots have been proposed
for fall detection <|cite_start|> (Reference: 2020 17th International Conference on Ubiquitous Robots (UR): ) <|cite_end|> <|cite_start|> (Reference: 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020: ) <|cite_end|> <|cite_start|> (Reference: 2021 IEEE 4th International Conference on Electronics Technology (ICET): ) <|cite_end|>,
all of them formulate fall detection as a classification problem.
This requires classifying images as either "fall" or "no fall."
We instead formulate fall detection as an object detection problem.
We note that there have been great improvements in the accuracy of object detection algorithms like YOLOv8 <|cite_start|> (Reference: YOLO Nano: a Highly Compact You Only Look Once Convolutional Neural Network for Object Detection: Object detection remains an active area of research in the field of computer vision, and considerable advances and successes has been achieved in this area through the design of deep convolutional neural networks for tackling object detection. Despite these successes, one of the biggest challenges to widespread deployment of such object detection networks on edge and mobile scenarios is the high computational and memory requirements. As such, there has been growing research interest in the design of efficient deep neural network architectures catered for edge and mobile usage. In this study, we introduce YOLO Nano, a highly compact deep convolutional neural network for the task of object detection. A human-machine collaborative design strategy is leveraged to create YOLO Nano, where principled network design prototyping, based on design principles from the YOLO family of single-shot object detection network architectures, is coupled with machine-driven design exploration to create a compact network with highly customized module-level macroarchitecture and microarchitecture designs tailored for the task of embedded object detection. The proposed YOLO Nano possesses a model size of ~4.0MB (>15.1x and >8.3x smaller than Tiny YOLOv2 and Tiny YOLOv3, respectively) and requires 4.57B operations for inference (>34% and ~17% lower than Tiny YOLOv2 and Tiny YOLOv3, respectively) while still achieving an mAP of ~69.1% on the VOC 2007 dataset (~12% and ~10.7% higher than Tiny YOLOv2 and Tiny YOLOv3, respectively). Experiments on inference speed and power efficiency on a Jetson AGX Xavier embedded module at different power budgets further demonstrate the efficacy of YOLO Nano for embedded scenarios.) <|cite_end|>.
By formulating fall detection as a problem of object detection instead of one of image classification, we can identify multiple persons in the same image and classify each as fallen or not.
This is especially useful when a dummy or statue is present in the same image as the elderly person that the robot is supposed to watch.
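To make the object-detection formulation concrete, the sketch below queries a detector for per-person fall decisions. It assumes a YOLOv8-style model fine-tuned on two hypothetical classes (0 = not fallen, 1 = fallen); the model file name, class indices, and confidence threshold are illustrative assumptions rather than details of our actual system.
\begin{verbatim}
from ultralytics import YOLO   # any detector exposing boxes + class labels would do

# Hypothetical model fine-tuned on two classes: 0 = "not_fallen", 1 = "fallen".
model = YOLO("fall_detector.pt")

def detect_falls(image_path, conf_threshold=0.5):
    """Return one (bounding_box, is_fallen) entry per detected person."""
    results = model(image_path)[0]              # run inference on a single image
    detections = []
    for box in results.boxes:
        if float(box.conf) < conf_threshold:
            continue                            # skip low-confidence detections
        class_id = int(box.cls)                 # 0 or 1 in this two-class setup
        xyxy = box.xyxy[0].tolist()             # [x1, y1, x2, y2] in pixels
        detections.append((xyxy, class_id == 1))
    return detections

# Each person in the frame gets its own verdict, so a mannequin detected as
# "not fallen" does not mask a real person detected as "fallen".
for bbox, fallen in detect_falls("frame.jpg"):
    print(bbox, "FALLEN" if fallen else "ok")
\end{verbatim}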
Moreover, low-cost design has not been a priority for the above projects.
To address this, we design and prototype a custom low-cost mobile robot intended expressly for periodically surveying an area and checking for persons who have fallen.
The main contributions of this project are threefold.
(1) We design and develop a low-cost, open-source mobile robot for indoor autonomous navigation and fall detection. Our robot is similar in cost to the popular robotics platform Turtlebot but differs in that it is equipped with an Nvidia Jetson Nano, omnidirectional wheels, and autonomous wireless charging.
(2) We improve fall detection from the robot's viewpoint. Our robot's camera is installed at a height of 0.15 m from the ground to maintain a small form factor, unlike typical fall detection datasets, which are collected from a human-height viewpoint or higher.
Because of this, machine-learning based fall detection and object detection algorithms have lower accuracy when images are taken from a robot's perspective. We compensate for this by computing a homography transformation that warps images from the robot's perspective to approximate the typical human-height perspective in our dataset (an illustrative sketch follows this list). Our experiments demonstrate that this improves fall detection performance by 6-12\%.
(3) Lastly, we improve the robot controller through system identification of the motors. We modify the controller by training a model that predicts each motor's rotation velocity from the input Pulse Width Modulation (PWM) signal (a sketch of this fitting procedure also follows below).
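As a concrete illustration of the perspective correction in contribution (2), the following sketch estimates a homography from a handful of manually matched landmark pairs between a robot-height view and a human-height view of the same scene, and then warps new robot-view frames with it. The point coordinates, file names, and output size are placeholder assumptions; the sketch only illustrates the general use of OpenCV's findHomography and warpPerspective, not our actual implementation.
\begin{verbatim}
import cv2
import numpy as np

# Placeholder correspondences: pixel locations of the same scene landmarks
# seen from the robot-height camera and from a human-height reference view.
robot_pts = np.array([[120, 400], [520, 410], [560, 90], [80, 100]], dtype=np.float32)
human_pts = np.array([[100, 460], [540, 460], [540, 40], [100, 40]], dtype=np.float32)

# Estimate the 3x3 homography mapping robot-view pixels to human-view pixels.
H, _ = cv2.findHomography(robot_pts, human_pts)

def to_human_perspective(frame, size=(640, 480)):
    """Warp a robot-perspective frame so it resembles a human-height view."""
    return cv2.warpPerspective(frame, H, size)

frame = cv2.imread("robot_view.jpg")        # hypothetical input frame
warped = to_human_perspective(frame)
cv2.imwrite("human_like_view.jpg", warped)  # feed this to the fall detector
\end{verbatim}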
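Similarly, the motor system identification in contribution (3) can be illustrated by the following sketch, which fits a low-order polynomial mapping PWM duty cycle to measured wheel velocity and inverts it with a simple search to select a PWM command for a desired velocity. The log file name, polynomial degree, and PWM range are assumptions made only for this example.
\begin{verbatim}
import numpy as np

# Hypothetical log of (pwm_duty_cycle, measured_velocity_rad_s) pairs collected
# by stepping through PWM values while reading the wheel encoders.
data = np.loadtxt("motor_log.csv", delimiter=",")   # columns: pwm, velocity
pwm, velocity = data[:, 0], data[:, 1]

# Forward model: predict velocity from PWM with a low-order polynomial fit.
coeffs = np.polyfit(pwm, velocity, deg=2)
predict_velocity = np.poly1d(coeffs)

def pwm_for_velocity(target_velocity, pwm_min=0, pwm_max=255):
    """Invert the fitted model by searching the PWM range for the closest match."""
    candidates = np.arange(pwm_min, pwm_max + 1)
    predictions = predict_velocity(candidates)
    return int(candidates[np.argmin(np.abs(predictions - target_velocity))])

# Example: choose the PWM command expected to yield roughly 3.0 rad/s.
print(pwm_for_velocity(3.0))
\end{verbatim}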
Related Work
\label{sec:related-work}
\subsection{Fall detection using sensors}
In this section, we compare the following fall detection technologies:
user-activated alarm systems <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|>, passive fall detection systems <|cite_start|> (Reference: Falls, Fall Prevention, and Fall Detection Technologies: ) <|cite_end|> <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. 
As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|> <|cite_start|> (Reference: A survey on vision-based fall detection: Falls are a major cause of fatal injury for the elderly population. To improve the quality of living for seniors, a wide range of monitoring systems with fall detection functionality have been proposed over recent years. This article is a survey of systems and algorithms which aim at automatically detecting cases where a human falls and may have been injured. Existing fall detection methods can be categorized as using sensors, or being exclusively vision-based. This literature review focuses on vision-based methods.) 
<|cite_end|> <|cite_start|> (Reference: Elderly fall detection systems: A literature survey: Falling is among the most damaging event elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although there are various existing studies which focus on the fall detection with individual sensors, such as wearable ones and depth cameras, the performance of these systems are still not satisfying as they suffer mostly from high false alarms. Literature shows that fusing the signals of different sensors could result in higher accuracy and lower false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the benchmark data sets available that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved up to date and to identify areas where further effort would be beneficial.) <|cite_end|> and mobile robots as fall detectors <|cite_start|> (Reference: 2020 17th International Conference on Ubiquitous Robots (UR): ) <|cite_end|> <|cite_start|> (Reference: 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020: ) <|cite_end|> <|cite_start|> (Reference: 2021 IEEE 4th International Conference on Electronics Technology (ICET): ) <|cite_end|>.
User-activated alarm systems <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|> require the person to press a button to request help from community responders. One limitation of this approach is that it requires the fallen person to be conscious and, furthermore, to decide to call for help.
The hesitancy of the elderly to ask for help under such circumstances has pushed researchers to develop passive fall detection systems that do not require any action from the patients themselves <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|>
Literature on passive fall detection systems has been reviewed multiple times in the last two decades <|cite_start|> (Reference: Falls, Fall Prevention, and Fall Detection Technologies: ) <|cite_end|> <|cite_start|> (Reference: Fall detection devices and their use with older adults: a systematic review: Background:Falls represent a significant threat to the health and independence of adults aged 65 years and older. As a wide variety and large number of passive monitoring systems are currently and increasingly available to detect when individuals have fallen, there is a need to analyze and synthesize the evidence regarding their ability to accurately detect falls to determine which systems are most effective. Objectives:The purpose of this literature review is to systematically assess the current state of design and implementation of fall-detection devices. This review also examines to what extent these devices have been tested in the real world as well as the acceptability of these devices to older adults. Data Sources:A systematic literature review was conducted in PubMed, CINAHL, EMBASE, and PsycINFO from their respective inception dates to June 25, 2013. Study Eligibility Criteria and Interventions:Articles were included if they discussed a project or multiple projects involving a system with the purpose of detecting a fall in adults. It was not a requirement for inclusion in this review that the system targets persons older than 65 years. Articles were excluded if they were not written in English or if they looked at fall risk, fall detection in children, fall prevention, or a personal emergency response device. Study Appraisal and Synthesis Methods:Studies were initially divided into those using sensitivity, specificity, or accuracy in their evaluation methods and those using other methods to evaluate their devices. Studies were further classified into wearable devices and nonwearable devices. Studies were appraised for inclusion of older adults in sample and if evaluation included real-world settings. Results:This review identified 57 projects that used wearable systems and 35 projects using nonwearable systems, regardless of evaluation technique. Nonwearable systems included cameras, motion sensors, microphones, and floor sensors. Of the projects examining wearable systems, only 7.1% reported monitoring older adults in a real-world setting. There were no studies of nonwearable devices that used older adults as subjects in either a laboratory or a real-world setting. In general, older adults appear to be interested in using such devices although they express concerns over privacy and understanding exactly what the device is doing at specific times. Limitations:This systematic review was limited to articles written in English and did not include gray literature. Manual paper screening and review processes may have been subject to interpretive bias. Conclusions and Implications of Key Findings:There exists a large body of work describing various fall-detection devices. The challenge in this area is to create highly accurate unobtrusive devices. From this review it appears that the technology is becoming more able to accomplish such a task. There is a need now for more real-world tests as well as standardization of the evaluation of these devices.) <|cite_end|> <|cite_start|> (Reference: A survey on vision-based fall detection: Falls are a major cause of fatal injury for the elderly population. 
To improve the quality of living for seniors, a wide range of monitoring systems with fall detection functionality have been proposed over recent years. This article is a survey of systems and algorithms which aim at automatically detecting cases where a human falls and may have been injured. Existing fall detection methods can be categorized as using sensors, or being exclusively vision-based. This literature review focuses on vision-based methods.) <|cite_end|> <|cite_start|> (Reference: Elderly fall detection systems: A literature survey: Falling is among the most damaging event elderly people may experience. With the ever-growing aging population, there is an urgent need for the development of fall detection systems. Thanks to the rapid development of sensor networks and the Internet of Things (IoT), human-computer interaction using sensor fusion has been regarded as an effective method to address the problem of fall detection. In this paper, we provide a literature survey of work conducted on elderly fall detection using sensor networks and IoT. Although there are various existing studies which focus on the fall detection with individual sensors, such as wearable ones and depth cameras, the performance of these systems are still not satisfying as they suffer mostly from high false alarms. Literature shows that fusing the signals of different sensors could result in higher accuracy and lower false alarms, while improving the robustness of such systems. We approach this survey from different perspectives, including data collection, data transmission, sensor fusion, data analysis, security, and privacy. We also review the benchmark data sets available that have been used to quantify the performance of the proposed methods. The survey is meant to provide researchers in the field of elderly fall detection using sensor networks with a summary of progress achieved up to date and to identify areas where further effort would be beneficial.) <|cite_end|>.
These fall detectors can be classified based on the location of sensors as \emph{wearable} or \emph{ambient}.
A \emph{wearable} fall detector is worn by the person to be tracked.
It typically uses IMU (Inertial Measurement Units) and health sensors to detect falls.
An \emph{ambient} fall detection technology is usually installed in the person's home.
These systems typically use pressure sensors, vibration sensors, or cameras to detect a fall and alert the caregivers.
While wearable sensors have become less conspicuous, they can easily lead to false alarms.
On the other hand, ambient sensors, while more accurate, are more costly.
Admittedly, these sensor types can be combined to complement each other. One limitation of all camera-based (vision-based) systems is that some users consider them an invasion of the privacy of the individual being monitored.
These limitations, combined with advances in deep learning and autonomous robots, have led to a new type of ambient fall detectors: mobile robots as fall detectors <|cite_start|> (Reference: 2020 17th International Conference on Ubiquitous Robots (UR): ) <|cite_end|> <|cite_start|> (Reference: 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020: ) <|cite_end|> <|cite_start|> (Reference: A real-time fall detection system in elderly care using mobile robot and kinect sensor: —The growing population of elderly people especially in developed countries motivates the researchers to develop healthcare systems to ensure the safety of elderly people at home. On the other hand, mobile robots can provide an efficient solution to healthcare problem. Moreover, using new technologies such as the Kinect sensor with robotics could bring new ways to build intelligent systems that could use to monitor the elderly people, and raise an alarm in case of dangerous events, such as falling down, are detected. Falls and their consequences are a major risk especially for elderly people who live alone where immediate assistance is needed. In this work, the Kinect sensor is used to introduce a mobile robot system to follow a person and detect when the target person has fallen. In addition, the mobile robot is provided with a cell phone that is used to send an SMS message notification and make an emergency call when a fall is detected.) <|cite_end|> <|cite_start|> (Reference: 2021 IEEE 4th International Conference on Electronics Technology (ICET): ) <|cite_end|>. These are ambient-mobile sensors, contrasting with the previous generation of ambient-static sensors.
While most mobile robots also depend on camera-based fall detection, some users may prefer them over ambient camera-based systems. Because mobile robots perform only periodic monitoring (as opposed to the continuous monitoring of ambient systems), they are not all-seeing at all times.
A mobile robot system can be configured to check on a person at regular intervals, such as every 30 minutes, and it can also be triggered by wearable sensors or loud noises.
The mobile robot can then search for the person in the house. If the person is found and identified as being in a fallen position, then an alert is generated.
A trigger is also generated if a room, such as the bedroom, is found to be inaccessible.
The advantage of this approach over ambient cameras is that the monitoring is periodic rather than continuous, and the person being checked on is reminded of the robot's presence. The requirement to keep the floor clear for the robot's passage can also help prevent falls.
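The periodic-check behavior described above can be summarized by the following scheduling sketch. It is a simplified illustration rather than any particular system's software: the robot, detector, notifier, and trigger interfaces, as well as the 30-minute interval, are assumptions standing in for the navigation, detection, and alerting components a concrete system would provide.
\begin{verbatim}
import time

CHECK_INTERVAL_S = 30 * 60  # periodic check, e.g. every 30 minutes

def monitoring_loop(robot, detector, notifier, triggers):
    """Periodically, or when externally triggered, search the home for a fallen person."""
    next_check = time.time()
    while True:
        # Start a check on schedule or when a wearable-sensor / loud-noise trigger fires.
        if time.time() >= next_check or triggers.pending():
            for room in robot.rooms():
                if not robot.navigate_to(room):
                    notifier.alert(f"Room '{room}' is inaccessible")  # cannot verify this room
                    continue
                for frame in robot.scan_room():
                    if detector.person_fallen(frame):
                        notifier.alert(f"Fall detected in '{room}'")
            next_check = time.time() + CHECK_INTERVAL_S
        time.sleep(1.0)
\end{verbatim}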
\subsection{Low-cost mobile robots}
Extensive research has been carried out in the field of intelligent autonomous mobile robots to develop advanced features such as obstacle avoidance, object detection, path planning, and map creation. In one such effort, Andruino-R2, a mobile robot, was developed and implemented by the authors of <|cite_start|> (Reference: An Android and Arduino Based Low-Cost Educational Robot with Applied Intelligent Control and Machine Learning: Applied Science requires testbeds to carry out experiments and validate in practice the results of the application of the methods. This article presents a low-cost (35–40 euros) educational mobile robot, based on Android and Arduino, integrated with Robot Operating System (ROS), together with its application for learning and teaching in the domain of intelligent automatic control, computer vision and Machine Learning. Specifically, the practical application to visual path tracking integrated with a Fuzzy Collision Risk system, that avoids collision with obstacles ahead, is shown. Likewise, a Wi-Fi positioning system is presented, which allows identifying in which room the robot is located, based on self-collected data and Machine Learning.) <|cite_end|> with line-following navigation, combining Arduino and Java-based ROS. The authors of <|cite_start|> (Reference: Development of e-butler: Introduction of robot system in hospitality with mobile application: ) <|cite_end|> developed an integrated system named BeeButler that combines a Flutter-based multi-platform mobile app and a robot to aid hotel guests in ordering amenities. It avoids the earlier reliance on external computing devices by using an onboard Jetson Nano and an Arduino Mega 2560 to control the robot. Still, it uses a line follower and RFID tags for localization, which is unsuitable for navigating an unknown environment without installing additional lines and RFID tags.
A two-wheel home-assistive robot featuring a combination of omnidirectional wheels with differential driving was developed by the authors of <|cite_start|> (Reference: 学界情報 国際会議レポート:The 32nd International Symposium on Industrial Electronics (ISIE) June 19-21, 2023, Helsinki-Espoo, Finland: ) <|cite_end|>. They used an STM32 microcontroller and an NVIDIA Jetson Nano to control the robot and demonstrated successful 2D and 3D SLAM experiments, autonomous navigation, and obstacle avoidance. The authors of <|cite_start|> (Reference: 5th International Conference on Vision, Image and Signal Processing, ICVISP 2021, Kuala Lumpur, Malaysia, December 18-20, 2021: ) <|cite_end|> developed a ROS-based omnidirectional robot with mapping, localization, and navigation capabilities using multi-sensor fusion on a Raspberry Pi 4B, emphasizing accuracy and efficiency.
"<|reference_start|> 2020 International Joint Conference on Neural Networks, IJCNN 2020, Glasgow, United Kingdom, July 19-24, 2020: <|reference_end|>",
"<|reference_start|> A survey on vision-based fall detection: Falls are a major cause of fatal injury for the elderly population. To improve the quality of living for seniors, a wide range of monitoring systems with fall detection functionality have been proposed over recent years. This article is a survey of systems and algorithms which aim at automatically detecting cases where a human falls and may have been injured. Existing fall detection methods can be categorized as using sensors, or being exclusively vision-based. This literature review focuses on vision-based methods. <|reference_end|>",
"<|reference_start|> 2021 IEEE 4th International Conference on Electronics Technology (ICET): <|reference_end|>",
"<|reference_start|> 学界情報 国際会議レポート:The 32nd International Symposium on Industrial Electronics (ISIE) June 19-21, 2023, Helsinki-Espoo, Finland: <|reference_end|>"
] | [
23,
29,
34,
37
] | {"<|multi_cite_1_1|>": "ss-2449001", "<|multi_cite_1_2|>": "ss-2449002", "<|cite_2|>": "ss-2449003", "<|cite_3|>": "ss-2449004", "<|cite_4|>": "ss-2449002", "<|cite_5|>": "ss-2449005", "<|multi_cite_6_1|>": "ss-2449004", "<|multi_cite_6_2|>": "ss-2449005", "<|multi_cite_6_3|>": "ss-2449006", "<|multi_cite_6_4|>": "ss-759848", "<|multi_cite_7_1|>": "ss-1968002", "<|multi_cite_7_2|>": "ss-931713", "<|multi_cite_7_3|>": "ss-2449007", "<|multi_cite_8_1|>": "ss-1968002", "<|multi_cite_8_2|>": "ss-931713", "<|multi_cite_8_3|>": "ss-2449007", "<|cite_9|>": "arxiv-226914", "<|cite_10|>": "ss-2449005", "<|multi_cite_11_1|>": "ss-2449004", "<|multi_cite_11_2|>": "ss-2449005", "<|multi_cite_11_3|>": "ss-2449006", "<|multi_cite_11_4|>": "ss-759848", "<|multi_cite_12_1|>": "ss-1968002", "<|multi_cite_12_2|>": "ss-931713", "<|multi_cite_12_3|>": "ss-2449007", "<|cite_13|>": "ss-2449005", "<|cite_14|>": "ss-2449005", "<|multi_cite_15_1|>": "ss-2449004", "<|multi_cite_15_2|>": "ss-2449005", "<|multi_cite_15_3|>": "ss-2449006", "<|multi_cite_15_4|>": "ss-759848", "<|multi_cite_16_1|>": "ss-1968002", "<|multi_cite_16_2|>": "ss-931713", "<|multi_cite_16_3|>": "ss-2449008", "<|multi_cite_16_5|>": "ss-2449007", "<|cite_17|>": "ss-2449009", "<|cite_18|>": "ss-2449010", "<|cite_19|>": "ss-2449011", "<|cite_20|>": "ss-2449012"} |
2111.14479 | <|paper_start|> Title: Mixed Precision DNN Qunatization for Overlapped Speech Separation and Recognition
Abstract: Mixed Precision DNN Qunatization for Overlapped Speech Separation and Recognition: Recognition of overlapped speech has been a highly challenging task to date. State-of-the-art multi-channel speech separation system are becoming increasingly complex and expensive for practical applications. To this end, low-bit neural network quantization provides a powerful solution to dramatically reduce their model size. However, current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different model components to quantization errors. In this paper, novel mixed precision DNN quantization methods are proposed by applying locally variable bit-widths to individual TCN components of a TF masking based multi-channel speech separation system. The optimal local precision settings are automatically learned using three techniques. The first two approaches utilize quantization sensitivity metrics based on either the mean square error (MSE) loss function curvature, or the KL-divergence measured between full precision and quantized separation models. The third approach is based on mixed precision neural architecture search. Experiments conducted on the LRS3-TED corpus simulated overlapped speech data suggest that the proposed mixed precision quantization techniques consistently outperform the uniform precision baseline speech separation systems of comparable bit-widths in terms of SI-SNR and PESQ scores as well as word error rate (WER) reductions up to 2.88% absolute (8% relative).
Introduction
\label{sec:intro}
\vspace{-0.5ex}
Despite the rapid progress of automatic speech recognition (ASR) in the past few decades, accurate recognition of overlapped speech remains a highly challenging task to date. To this end, microphone arrays and the required multi-channel signal integration technologies represented by TF masking <|cite_start|> (Reference: A comprehensive study of speech separation: spectrogram vs waveform separation: Speech separation has been studied widely for single-channel close-talk microphone recordings over the past few years; developed solutions are mostly in frequency-domain. Recently, a raw audio waveform separation network (TasNet) is introduced for single-channel data, with achieving high Si-SNR (scale-invariant source-to-noise ratio) and SDR (source-to-distortion ratio) comparing against the state-of-the-art solution in frequency-domain. In this study, we incorporate effective components of the TasNet into a frequency-domain separation method. We compare both for alternative scenarios. We introduce a solution for directly optimizing the separation criterion in frequency-domain networks. In addition to speech separation objective and subjective measurements, we evaluate the separation performance on a speech recognition task as well. We study the speech separation problem for far-field data (more similar to naturalistic audio streams) and develop multi-channel solutions for both frequency and time-domain separators with utilizing spectral, spatial and speaker location information. For our experiments, we simulated multi-channel spatialized reverberate WSJ0-2mix dataset. Our experimental results show that spectrogram separation can achieve competitive performance with better network design. Multi-channel framework as well is shown to improve the single-channel performance relatively up to +35.5% and +46% in terms of WER and SDR, respectively.) <|cite_end|> <|cite_start|> (Reference: MULTI-BAND PIT AND MODEL INTEGRATION FOR IMPROVED MULTI-CHANNEL SPEECH SEPARATION: The recent exploration of deep learning for supervised speech separation has significantly accelerated the progress on the multi-talker speech separation problem. Multi-channel extension has attracted much research attention due to the benefit of spatial information in far-field acoustic environments. In this paper, We review the most recent models of multi-channel permutation invariant training (PIT), investigate spatial features formed by microphone pairs and their underlying impact and issue, present a multi-band architecture for effective feature encoding, and conduct a model integration between single-channel and multi-channel PIT for resolving the spatial overlapping problem in the conventional multi-channel PIT framework. The evaluation confirms the significant improvement achieved with the proposed model and training approach for the multi-channel speech separation.) <|cite_end|>, delay and sum <|cite_start|> (Reference: Beamforming: a versatile approach to spatial filtering: An overview of beamforming from a signal-processing perspective is provided, with an emphasis on recent research. Data-independent, statistically optimum, adaptive, and partially adaptive beamforming are discussed. Basic notation, terminology, and concepts are included. 
Several beamformer implementations are briefly described.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Acoustic Beamforming for Speaker Diarization of Meetings: When performing speaker diarization on recordings from meetings, multiple microphones of different qualities are usually available and distributed around the meeting room. Although several approaches have been proposed in recent years to take advantage of multiple microphones, they are either too computationally expensive and not easily scalable or they cannot outperform the simpler case of using the best single microphone. In this paper, the use of classic acoustic beamforming techniques is proposed together with several novel algorithms to create a complete frontend for speaker diarization in the meeting room domain. New techniques we are presenting include blind reference-channel selection, two-step time delay of arrival (TDOA) Viterbi postprocessing, and a dynamic output signal weighting algorithm, together with using such TDOA values in the diarization to complement the acoustic information. Tests on speaker diarization show a 25% relative improvement on the test set compared to using a single most centrally located microphone. Additional experimental results show improvements using these techniques in a speech recognition task.) <|cite_end|> and minimum variance distortionless response (MVDR) <|cite_start|> (Reference: An iterative algorithm for the computation of the MVDR filter: Statistical conditional optimization criteria lead to the development of an iterative algorithm that starts from the matched filter (or constraint vector) and generates a sequence of filters that converges to the minimum-variance-distortionless-response (MVDR) solution for any positive definite input autocorrelation matrix. Computationally, the algorithm is a simple, noninvasive, recursive procedure that avoids any form of explicit autocorrelation matrix inversion, decomposition, or diagonalization. Theoretical analysis reveals basic properties of the algorithm and establishes formal convergence. When the input autocorrelation matrix is replaced by a conventional sample-average (positive definite) estimate, the algorithm effectively generates a sequence of MVDR filter estimators; the bias converges rapidly to zero and the covariance trace rises slowly and asymptotically to the covariance trace of the familiar sample-matrix-inversion (SMI) estimator. In fact, formal convergence of the estimator sequence to the SMI estimate is established. However, for short data records, it is the early, nonasymptotic elements of the generated sequence of estimators that offer favorable bias covariance balance and are seen to outperform in mean-square estimation error, constraint-LMS, RLS-type, orthogonal multistage decomposition, as well as plain and diagonally loaded SMI estimates. An illustrative interference suppression example is followed throughout this presentation.) <|cite_end|> <|cite_start|> (Reference: On optimal frequency-domain multichannel linear filtering for noise reduction: Several contributions have been made so far to develop optimal multichannel linear filtering approaches and show their ability to reduce the acoustic noise. However, there has not been a clear unifying theoretical analysis of their performance in terms of both noise reduction and speech distortion. To fill this gap, we analyze the frequency-domain (non-causal) multichannel linear filtering for noise reduction in this paper. 
For completeness, we consider the noise reduction constrained optimization problem that leads to the parameterized multichannel non-causal Wiener filter (PMWF). Our contribution is fivefold. First, we formally show that the minimum variance distortionless response (MVDR) filter is a particular case of the PMWF by properly formulating the constrained optimization problem of noise reduction. Second, we propose new simplified expressions for the PMWF, the MVDR, and the generalized sidelobe canceller (GSC) that depend on the signals' statistics only. In contrast to earlier works, these expressions are explicitly independent of the channel transfer function ratios. Third, we quantify the theoretical gains and losses in terms of speech distortion and noise reduction when using the PWMF by establishing new simplified closed-form expressions for three performance measures, namely, the signal distortion index, the noise reduction factor (originally proposed in the paper titled ldquoNew insights into the noise reduction Wiener filter,rdquo by J. Chen (IEEE Transactions on Audio, Speech, and Language Processing, Vol. 15, no. 4, pp. 1218-1234, Jul. 2006) to analyze the single channel time-domain Wiener filter), and the output signal-to-noise ratio (SNR). Fourth, we analyze the effects of coherent and incoherent noise in addition to the benefits of utilizing multiple microphones. Fifth, we propose a new proof for the a posteriori SNR improvement achieved by the PMWF. Finally, we provide some simulations results to corroborate the findings of this work.) <|cite_end|> play a key role in state-of-the-art overlapped speech separation and recognition systems. With the wider application of deep learning based speech technologies, these speech separation methods have evolved and been integrated into a variety of DNN based designs based on, for example, convolutional time-domain audio separation networks (Conv-TasNets) <|cite_start|> (Reference: {Conv-tasnet: Surpassing ideal time--frequency magnitude masking for speech separation: Single-channel, speaker-independent speech separation methods have recently seen great progress. However, the accuracy, latency, and computational cost of such methods remain insufficient. The majority of the previous methods have formulated the separation problem through the time–frequency representation of the mixed signal, which has several drawbacks, including the decoupling of the phase and magnitude of the signal, the suboptimality of time–frequency representation for speech separation, and the long latency in calculating the spectrograms. To address these shortcomings, we propose a fully convolutional time-domain audio separation network (Conv-TasNet), a deep learning framework for end-to-end time-domain speech separation. Conv-TasNet uses a linear encoder to generate a representation of the speech waveform optimized for separating individual speakers. Speaker separation is achieved by applying a set of weighting functions (masks) to the encoder output. The modified encoder representations are then inverted back to the waveforms using a linear decoder. The masks are found using a temporal convolutional network consisting of stacked one-dimensional dilated convolutional blocks, which allows the network to model the long-term dependencies of the speech signal while maintaining a small model size. The proposed Conv-TasNet system significantly outperforms previous time–frequency masking methods in separating two- and three-speaker mixtures. 
Additionally, Conv-TasNet surpasses several ideal time–frequency magnitude masks in two-speaker speech separation as evaluated by both objective distortion measures and subjective quality assessment by human listeners. Finally, Conv-TasNet has a significantly smaller model size and a shorter minimum latency, making it a suitable solution for both offline and real-time speech separation applications. This study, therefore, represents a major step toward the realization of speech separation systems for real-world speech processing technologies.) <|cite_end|>, dual path recurrent neural networks and transformers <|cite_start|> (Reference: Dual-path RNN: efficient long sequence modeling for time-domain single-channel speech separation: Recent studies in deep learning-based speech separation have proven the superiority of time-domain approaches to conventional time-frequency-based methods. Unlike the time-frequency domain approaches, the time-domain separation systems often receive input sequences consisting of a huge number of time steps, which introduces challenges for modeling extremely long sequences. Conventional recurrent neural networks (RNNs) are not effective for modeling such long sequences due to optimization difficulties, while one-dimensional convolutional neural networks (1-D CNNs) cannot perform utterance-level sequence modeling when its receptive field is smaller than the sequence length. In this paper, we propose dual-path recurrent neural network (DPRNN), a simple yet effective method for organizing RNN layers in a deep structure to model extremely long sequences. DPRNN splits the long sequential input into smaller chunks and applies intra- and inter-chunk operations iteratively, where the input length can be made proportional to the square root of the original sequence length in each operation. Experiments show that by replacing 1-D CNN with DPRNN and apply sample-level modeling in the time-domain audio separation network (TasNet), a new state-of-the-art performance on WSJ0-2mix is achieved with a 20 times smaller model than the previous best system.) <|cite_end|> <|cite_start|> (Reference: Dual-Path Modeling for Long Recording Speech Separation in Meetings: The continuous speech separation (CSS) is a task to separate the speech sources from a long, partially overlapped recording, which involves a varying number of speakers. A straightforward extension of conventional utterance-level speech separation to the CSS task is to segment the long recording with a size-fixed window and process each window separately. Though effective, this extension fails to model the long dependency in speech and thus leads to sub-optimum performance. The recent proposed dual-path modeling could be a remedy to this problem, thanks to its capability in jointly modeling the cross-window dependency and the local-window processing. In this work, we further extend the dual-path modeling framework for CSS task. A transformer-based dual-path system is proposed, which integrates transform layers for global modeling. The proposed models are applied to LibriCSS, a real recorded multi-talk dataset, and consistent WER reduction can be observed in the ASR evaluation for separated speech. Also, a dual-path transformer equipped with convolutional layers is proposed. It significantly reduces the computation amount by 30% with better WER evaluation. Furthermore, the online processing dual-path models are investigated, which shows 10% relative WER reduction compared to the baseline.) <|cite_end|>
. State-of-the-art speech separation performance requires increasingly complex neural architecture designs.
For example, the audio-only and audio-visual speech separation systems introduced in <|cite_start|> (Reference: Multi-modal Multi-channel Target Speech Separation: Target speech separation refers to extracting a target speaker's voice from an overlapped audio of simultaneous talkers. Previously the use of visual modality for target speech separation has demonstrated great potentials. This work proposes a general multi-modal framework for target speech separation by utilizing all the available information of the target speaker, including his/her spatial location, voice characteristics and lip movements. Also, under this framework, we investigate on the fusion methods for multi-modal joint modeling. A factorized attention-based fusion method is proposed to aggregate the high-level semantic information of multi-modalities at embedding level. This method firstly factorizes the mixture audio into a set of acoustic subspaces, then leverages the target's information from other modalities to enhance these subspace acoustic embeddings with a learnable attention scheme. To validate the robustness of proposed multi-modal separation model in practical scenarios, the system was evaluated under the condition that one of the modalities is temporarily missing, invalid or corrupted. Experiments are conducted on a large-scale audio-visual dataset collected from YouTube (to be released) that spatialized by simulated room impulse responses (RIRs). Experiment results illustrate that our proposed multi-modal framework significantly outperforms single-modal and bi-modal speech separation approaches, while can still support real-time processing.) <|cite_end|> contain 9.6 and 22 million parameters in total respectively.
However, this not only leads to a large increase in their overall memory footprint and computational cost when operating on the cloud, but also creates difficulties when such models are deployed on edge devices to enhance privacy and reduce latency.
To this end, one efficient and powerful solution is to use low-bit deep neural network (DNN) quantization techniques <|cite_start|> (Reference: Neural Network Language Model Compression With Product Quantization and Soft Binarization: Large memory consumption of the neural network language models (NN LMs) prohibits their use in many resource-constrained scenarios. Hence, effective NN LM compression approaches that are independent of NN structures are of great interest. However, previous approaches usually achieve a high compression ratio at the cost of obvious performance loss. In this paper, two recently proposed quantization approaches, product quantization (PQ) and soft binarization are effectively combined to address the issue. PQ decomposes word embedding matrices into a Cartesian product of low dimensional subspaces and quantizes each subspace separately. Soft binarization uses a small number of float scalars and the knowledge distillation technique to recover the performance loss during the binarization. Experiments show that the proposed approaches can achieve a high compression ratio, from 70 to over 100, while still maintaining comparable performance to the uncompressed NN LM on both PPL and word error rate criteria.) <|cite_end|> <|cite_start|> (Reference: Binary neural networks for speech recognition: Recently, deep neural networks (DNNs) significantly outperform Gaussian mixture models in acoustic modeling for speech recognition. However, the substantial increase in computational load during the inference stage makes deep models difficult to directly deploy on low-power embedded devices. To alleviate this issue, structure sparseness and low precision fixed-point quantization have been applied widely. In this work, binary neural networks for speech recognition are developed to reduce the computational cost during the inference stage. A fast implementation of binary matrix multiplication is introduced. On modern central processing unit (CPU) and graphics processing unit (GPU) architectures, a 5–7 times speedup compared with full precision floatingpoint matrix multiplication can be achieved in real applications. Several kinds of binary neural networks and related model optimization algorithms are developed for large vocabulary continuous speech recognition acoustic modeling. In addition, to improve the accuracy of binary models, knowledge distillation from the normal full precision floating-point model to the compressed binary model is explored. Experiments on the standard Switchboard speech recognition task show that the proposed binary neural networks can deliver 3–4 times speedup over the normal full precision deep models. With the knowledge distillation from the normal floating-point models, the binary DNNs or binary convolutional neural networks (CNNs) can restrict the word error rate (WER) degradation to within 15.0%, compared to the normal full precision floating-point DNNs or CNNs, respectively. Particularly for the binary CNN with binarization only on the convolutional layers, the WER degradation is very small and is almost negligible with the proposed approach.) <|cite_end|> <|cite_start|> (Reference: Extremely Low Bit Neural Network: Squeeze the Last Bit Out with ADMM: Although deep learning models are highly effective for various learning tasks, their high computational costs prohibit the deployment to scenarios where either memory or computational resources are limited. 
In this paper, we focus on compressing and accelerating deep models with network weights represented by very small numbers of bits, referred to as extremely low bit neural network. We model this problem as a discretely constrained optimization problem. Borrowing the idea from Alternating Direction Method of Multipliers (ADMM), we decouple the continuous parameters from the discrete constraints of network, and cast the original hard problem into several subproblems. We propose to solve these subproblems using extragradient and iterative quantization algorithms that lead to considerably faster convergency compared to conventional optimization methods. Extensive experiments on image recognition and object detection verify that the proposed algorithm is more effective than state-of-the-art approaches when coming to extremely low bit neural network.) <|cite_end|> <|cite_start|> (Reference: Highly efficient neural network language model compression using soft binarization training: The long short-term memory language model (LSTM LM) has been widely investigated in large vocabulary continuous speech recognition (LVCSR) task. Despite the excellent performance of LSTM LM, its usage in resource-constrained environments, such as portable devices, is limited due to the high consumption of memory. Binarized language model has been proposed to achieve significant memory reduction at the cost of performance degradation at high compression ratio. In this paper, we propose a soft binarization approach to recover the performance of binarized LSTM LM. Experiments show that the proposed method can achieve a high compression rate of 30 × with almost no performance loss in both language modeling and speech recognition tasks.) <|cite_end|>, which has drawn increasing interest in the machine learning and speech technology community in recent years. By replacing floating point weights with low precision values, the resulting quantization methods can significantly reduce the model size and inference time without modifying the model architectures. Traditional DNN quantization approaches <|cite_start|> (Reference: BinaryConnect: Training Deep Neural Networks with binary weights during propagations: Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN.) 
<|cite_end|> <|cite_start|> (Reference: Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers: The high memory consumption and computational costs of Recurrent neural network language models (RNNLMs) limit their wider application on resource constrained devices. In recent years, neural network quantization techniques that are capable of producing extremely low-bit compression, for example, binarized RNNLMs, are gaining increasing research interests. Directly training of quantized neural networks is difficult. By formulating quantized RNNLMs training as an optimization problem, this paper presents a novel method to train quantized RNNLMs from scratch using alternating direction methods of multipliers (ADMM). This method can also flexibly adjust the trade-off between the compression rate and model performance using tied low-bit quantization tables. Experiments on two tasks: Penn Treebank (PTB), and Switchboard (SWBD) suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full precision baseline RNNLMs. Faster convergence of 5 times in model training over the baseline binarized RNNLM quantization was also obtained. Index Terms: Language models, Recurrent neural networks, Quantization, Alternating direction methods of multipliers.) <|cite_end|> <|cite_start|> (Reference: 4-bit Quantization of LSTM-based Speech Recognition Models: We investigate the impact of aggressive low-precision representations of weights and activations in two families of large LSTM-based architectures for Automatic Speech Recognition (ASR): hybrid Deep Bidirectional LSTM - Hidden Markov Models (DBLSTM-HMMs) and Recurrent Neural Network - Transducers (RNN-Ts). Using a 4-bit integer representation, a na\"ive quantization approach applied to the LSTM portion of these models results in significant Word Error Rate (WER) degradation. On the other hand, we show that minimal accuracy loss is achievable with an appropriate choice of quantizers and initializations. In particular, we customize quantization schemes depending on the local properties of the network, improving recognition performance while limiting computational time. We demonstrate our solution on the Switchboard (SWB) and CallHome (CH) test sets of the NIST Hub5-2000 evaluation. DBLSTM-HMMs trained with 300 or 2000 hours of SWB data achieves $<$0.5% and $<$1% average WER degradation, respectively. On the more challenging RNN-T models, our quantization strategy limits degradation in 4-bit inference to 1.3%.) <|cite_end|> are predominantly based on uniform precision, where a manually defined identical bit-width is applied to all weight parameters during quantization. This fails to account for the varying performance sensitivity at different parts of the system to quantization errors. In practice, this often leads to large performance degradation against full precision models.
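A minimal sketch of such a uniform-precision quantizer is shown below (a symmetric, scale-based design is assumed here; it is not the exact recipe of any of the cited systems). Every weight tensor shares one manually chosen bit-width, and the quantization error grows rapidly as that bit-width shrinks.
\begin{verbatim}
import numpy as np

def uniform_quantize(w, n_bits):
    # Symmetric uniform quantization: one scale and one bit-width shared
    # by every weight in the tensor (the "uniform precision" setting).
    qmax = 2 ** (n_bits - 1) - 1           # e.g. 127 for 8 bits, 7 for 4 bits
    scale = max(np.max(np.abs(w)), 1e-12) / qmax
    w_int = np.clip(np.round(w / scale), -qmax, qmax)
    return w_int * scale                   # de-quantized weights used at inference

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=(256, 256)) # a hypothetical weight matrix
for n_bits in (8, 4, 2):
    err = np.mean((w - uniform_quantize(w, n_bits)) ** 2)
    print(f"{n_bits}-bit uniform quantization, weight MSE = {err:.2e}")
\end{verbatim}
Because every tensor is forced to the same precision, the layers that are most sensitive to quantization error suffer the most, which motivates the mixed precision treatment introduced next.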
To address the above issue, novel mixed precision DNN quantization methods are proposed in this paper, which apply locally variable bit-width settings to individual TCN components of a TF masking based multi-channel speech separation system <|cite_start|> (Reference: Audio-visual multi-channel integration and recognition of overlapped speech: Automatic speech recognition (ASR) technologies have been significantly advanced in the past few decades. However, recognition of overlapped speech remains a highly challenging task to date. To this end, multi-channel microphone array data are widely used in current ASR systems. Motivated by the invariance of visual modality to acoustic signal corruption and the additional cues they provide to separate the target speaker from the interfering sound sources, this paper presents an audio-visual multi-channel based recognition system for overlapped speech. It benefits from a tight integration between a speech separation front-end and recognition back-end, both of which incorporate additional video input. A series of audio-visual multi-channel speech separation front-end components based on TF masking, Filter&Sum and mask-based MVDR neural channel integration approaches are developed. To reduce the error cost mismatch between the separation and the recognition components, the entire system is jointly fine-tuned using a multi-task criterion interpolation of the scale-invariant signal to noise ratio (Si-SNR) with either the connectionist temporal classification (CTC), or lattice-free maximum mutual information (LF-MMI) loss function. Experiments suggest that: the proposed audio-visual multi-channel recognition system outperforms the baseline audio-only multi-channel ASR system by up to 8.04% (31.68% relative) and 22.86% (58.51% relative) absolute WER reduction on overlapped speech constructed using either simulation or replaying of the LRS2 dataset respectively. Consistent performance improvements are also obtained using the proposed audio-visual multi-channel recognition system when using occluded video input with the lip region randomly covered up to 60%.) <|cite_end|>. These methods are becoming well supported by the recent development of mixed precision DNN acceleration hardware that allows multiple locally set precision settings to be used <|cite_start|> (Reference: HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks: Quantization is an effective method for reducing memory footprint and inference time of Neural Networks, e.g., for efficient inference in the cloud, especially at the edge. However, ultra low precision quantization could lead to significant degradation in model generalization. A promising method to address this is to perform mixed-precision quantization, where more sensitive layers are kept at higher precision. However, the search space for a mixed-precision quantization is exponential in the number of layers. Recent work has proposed HAWQ, a novel Hessian based framework, with the aim of reducing this exponential search space by using second-order information. While promising, this prior work has three major limitations: (i) HAWQV1 only uses the top Hessian eigenvalue as a measure of sensitivity and do not consider the rest of the Hessian spectrum; (ii) HAWQV1 approach only provides relative sensitivity of different layers and therefore requires a manual selection of the mixed-precision setting; and (iii) HAWQV1 does not consider mixed-precision activation quantization. 
Here, we present HAWQV2 which addresses these shortcomings. For (i), we perform a theoretical analysis showing that a better sensitivity metric is to compute the average of all of the Hessian eigenvalues. For (ii), we develop a Pareto frontier based method for selecting the exact bit precision of different layers without any manual selection. For (iii), we extend the Hessian analysis to mixed-precision activation quantization. We have found this to be very beneficial for object detection. We show that HAWQV2 achieves new state-of-the-art results for a wide range of tasks.) <|cite_end|>. The resulting flexibility can provide a better trade-off between compression ratio and accuracy performance target. The optimal local precision settings are automatically learned using three techniques. The first two approaches utilize quantization sensitivity metrics based on either the mean square error (MSE) loss function curvature that can be approximated efficiently via matrix free techniques, or the KL-divergence measured between full precision and quantized separation models. The third approach is based on mixed precision neural architecture search.
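A simplified sketch of how such locally variable bit-widths could be assigned is given below. The per-layer sensitivity used here is only a cheap weight-domain proxy, assumed for illustration; the methods described above instead measure sensitivity through the MSE loss curvature or the KL-divergence between the full precision and quantized separation models.
\begin{verbatim}
import numpy as np

def uniform_quantize(w, n_bits):
    qmax = 2 ** (n_bits - 1) - 1
    scale = max(np.max(np.abs(w)), 1e-12) / qmax
    return np.clip(np.round(w / scale), -qmax, qmax) * scale

def assign_mixed_precision(layers, candidate_bits=(2, 4, 8), avg_bit_budget=4.0):
    # Rank layers by a sensitivity proxy, then greedily push the least
    # sensitive ones to the lowest bit-width until the average bit-width
    # meets the target compression budget.
    low, high = min(candidate_bits), max(candidate_bits)
    sens = [np.mean((w - uniform_quantize(w, low)) ** 2) for w in layers]
    order = np.argsort(sens)                # least sensitive layers first
    bits = [high] * len(layers)
    i = 0
    while np.mean(bits) > avg_bit_budget and i < len(layers):
        bits[order[i]] = low
        i += 1
    return bits

rng = np.random.default_rng(1)
toy_layers = [rng.normal(scale=s, size=(64, 64)) for s in (0.02, 0.1, 0.5, 0.05)]
print(assign_mixed_precision(toy_layers))   # e.g. [2, 2, 8, 2]
\end{verbatim}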
Experiments conducted on overlapped speech data simulated from the Lip Reading Sentences based on TED videos (LRS3-TED) corpus <|cite_start|> (Reference: Deep Audio-Visual Speech Recognition: The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in the wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss, and the other using a sequence-to-sequence loss. Both models are built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.) <|cite_end|> suggest that the proposed mixed precision quantization techniques consistently outperform the uniform precision baseline speech separation systems of comparable quantization bit-widths. Consistent performance improvements of up to 2.88\% absolute (8\% relative) were obtained in terms of both SI-SNR and PESQ based speech enhancement metrics and the speech recognition word error rate (WER). The 8-bit KL mixed precision quantized system achieved a “lossless” quantization over the full precision 32-bit baseline while incurring no statistically significant WER increase.
The main contributions of this paper are summarized as follows. First, this paper is the first work to apply mixed precision quantization methods to speech separation tasks. In contrast, previous researches on low-bit quantization within the speech community largely focused on the back-end recognition system <|cite_start|> (Reference: 4-bit Quantization of LSTM-based Speech Recognition Models: We investigate the impact of aggressive low-precision representations of weights and activations in two families of large LSTM-based architectures for Automatic Speech Recognition (ASR): hybrid Deep Bidirectional LSTM - Hidden Markov Models (DBLSTM-HMMs) and Recurrent Neural Network - Transducers (RNN-Ts). Using a 4-bit integer representation, a na\"ive quantization approach applied to the LSTM portion of these models results in significant Word Error Rate (WER) degradation. On the other hand, we show that minimal accuracy loss is achievable with an appropriate choice of quantizers and initializations. In particular, we customize quantization schemes depending on the local properties of the network, improving recognition performance while limiting computational time. We demonstrate our solution on the Switchboard (SWB) and CallHome (CH) test sets of the NIST Hub5-2000 evaluation. DBLSTM-HMMs trained with 300 or 2000 hours of SWB data achieves $<$0.5% and $<$1% average WER degradation, respectively. On the more challenging RNN-T models, our quantization strategy limits degradation in 4-bit inference to 1.3%.) <|cite_end|> <|cite_start|> (Reference: Quantization Aware Training with Absolute-Cosine Regularization for Automatic Speech Recognition.: Compression and quantization is important to neural networks in general and Automatic Speech Recognition (ASR) systems in particular, especially when they operate in real-time on resource-constrained devices. By using fewer number of bits for the model weights, the model size becomes much smaller while inference time is reduced significantly, with the cost of degraded performance. Such degradation can be potentially addressed by the so-called quantization-aware training (QAT). Existing QATs mostly take into account the quantization in forward propagation, while ignoring the quantization loss in gradient calculation during back-propagation. In this work, we introduce a novel QAT scheme based on absolute-cosine regularization (ACosR), which enforces a prior, quantization-friendly distribution to the model weights. We apply this novel approach into ASR task assuming a recurrent neural network transducer (RNN-T) architecture. The results show that there is zero to little degradation between floating-point, 8-bit, and 6-bit ACosR models. Weight distributions further confirm that in-training weights are very close to quantization levels when ACosR is applied.) <|cite_end|> and language models <|cite_start|> (Reference: Mixed precision quantization of transformer language models for speech recognition: State-of-the-art neural language models represented by Transformers are becoming increasingly complex and expensive for practical applications. Low-bit deep neural network quantization techniques provides a powerful solution to dramatically reduce their model size. Current low-bit quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of the system to quantization errors. To this end, novel mixed precision DNN quantization methods are proposed in this paper. 
The optimal local precision settings are automatically learned using two techniques. The first is based on a quantization sensitivity metric in the form of Hessian trace weighted quantization perturbation. The second is based on mixed precision Transformer architecture search. Alternating direction methods of multipliers (ADMM) are used to efficiently train mixed precision quantized DNN systems. Experiments conducted on Penn Treebank (PTB) and a Switchboard corpus trained LF-MMI TDNN system suggest the proposed mixed precision Transformer quantization techniques achieved model size compression ratios of up to 16 times over the full precision baseline with no recognition performance degradation. When being used to compress a larger full precision Transformer LM with more layers, overall word error rate (WER) reductions up to 1.7% absolute (18% relative) were obtained.) <|cite_end|>. In addition, prior researches on light weight speech enhancement approaches were based on neural structural sparsity compression <|cite_start|> (Reference: Towards model compression for deep learning based speech enhancement: The use of deep neural networks (DNNs) has dramatically elevated the performance of speech enhancement over the last decade. However, to achieve strong enhancement performance typically requires a large DNN, which is both memory and computation consuming, making it difficult to deploy such speech enhancement systems on devices with limited hardware resources or in applications with strict latency requirements. In this study, we propose two compression pipelines to reduce the model size for DNN-based speech enhancement, which incorporates three different techniques: sparse regularization, iterative pruning and clustering-based quantization. We systematically investigate these techniques and evaluate the proposed compression pipelines. Experimental results demonstrate that our approach reduces the sizes of four different models by large margins without significantly sacrificing their enhancement performance. In addition, we find that the proposed approach performs well on speaker separation, which further demonstrates the effectiveness of the approach for compressing speech separation models.) <|cite_end|> rather than the proposed mixed precision low-bit quantization methods. Second, the proposed 8-bit KL mixed precision quantized speech separation system achieved a “lossless” quantization over the full precision 32-bit baseline in terms of speech recognition accuracy.
The rest of this paper is organized as follows. The multi-channel speech separation system is reviewed in section 2. Uniform precision neural network quantization methods are introduced in section 3. Section 4 presents mixed precision quantization methods. Experiments and results are shown in section 5. Finally, conclusions and future work are discussed in section 6.
<|paper_end|>
"<|reference_start|> Dual-Path Modeling for Long Recording Speech Separation in Meetings: The continuous speech separation (CSS) is a task to separate the speech sources from a long, partially overlapped recording, which involves a varying number of speakers. A straightforward extension of conventional utterance-level speech separation to the CSS task is to segment the long recording with a size-fixed window and process each window separately. Though effective, this extension fails to model the long dependency in speech and thus leads to sub-optimum performance. The recent proposed dual-path modeling could be a remedy to this problem, thanks to its capability in jointly modeling the cross-window dependency and the local-window processing. In this work, we further extend the dual-path modeling framework for CSS task. A transformer-based dual-path system is proposed, which integrates transform layers for global modeling. The proposed models are applied to LibriCSS, a real recorded multi-talk dataset, and consistent WER reduction can be observed in the ASR evaluation for separated speech. Also, a dual-path transformer equipped with convolutional layers is proposed. It significantly reduces the computation amount by 30% with better WER evaluation. Furthermore, the online processing dual-path models are investigated, which shows 10% relative WER reduction compared to the baseline. <|reference_end|>",
"<|reference_start|> BinaryConnect: Training Deep Neural Networks with binary weights during propagations: Deep Neural Networks (DNN) have achieved state-of-the-art results in a wide range of tasks, with the best results obtained with large training sets and large models. In the past, GPUs enabled these breakthroughs because of their greater computational speed. In the future, faster computation at both training and test time is likely to be crucial for further progress and for consumer applications on low-power devices. As a result, there is much interest in research and development of dedicated hardware for Deep Learning (DL). Binary weights, i.e., weights which are constrained to only two possible values (e.g. -1 or 1), would bring great benefits to specialized DL hardware by replacing many multiply-accumulate operations by simple accumulations, as multipliers are the most space and power-hungry components of the digital implementation of neural networks. We introduce BinaryConnect, a method which consists in training a DNN with binary weights during the forward and backward propagations, while retaining precision of the stored weights in which gradients are accumulated. Like other dropout schemes, we show that BinaryConnect acts as regularizer and we obtain near state-of-the-art results with BinaryConnect on the permutation-invariant MNIST, CIFAR-10 and SVHN. <|reference_end|>",
"<|reference_start|> HAWQ-V2: Hessian Aware trace-Weighted Quantization of Neural Networks: Quantization is an effective method for reducing memory footprint and inference time of Neural Networks, e.g., for efficient inference in the cloud, especially at the edge. However, ultra low precision quantization could lead to significant degradation in model generalization. A promising method to address this is to perform mixed-precision quantization, where more sensitive layers are kept at higher precision. However, the search space for a mixed-precision quantization is exponential in the number of layers. Recent work has proposed HAWQ, a novel Hessian based framework, with the aim of reducing this exponential search space by using second-order information. While promising, this prior work has three major limitations: (i) HAWQV1 only uses the top Hessian eigenvalue as a measure of sensitivity and do not consider the rest of the Hessian spectrum; (ii) HAWQV1 approach only provides relative sensitivity of different layers and therefore requires a manual selection of the mixed-precision setting; and (iii) HAWQV1 does not consider mixed-precision activation quantization. Here, we present HAWQV2 which addresses these shortcomings. For (i), we perform a theoretical analysis showing that a better sensitivity metric is to compute the average of all of the Hessian eigenvalues. For (ii), we develop a Pareto frontier based method for selecting the exact bit precision of different layers without any manual selection. For (iii), we extend the Hessian analysis to mixed-precision activation quantization. We have found this to be very beneficial for object detection. We show that HAWQV2 achieves new state-of-the-art results for a wide range of tasks. <|reference_end|>",
"<|reference_start|> Mixed precision quantization of transformer language models for speech recognition: State-of-the-art neural language models represented by Transformers are becoming increasingly complex and expensive for practical applications. Low-bit deep neural network quantization techniques provides a powerful solution to dramatically reduce their model size. Current low-bit quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of the system to quantization errors. To this end, novel mixed precision DNN quantization methods are proposed in this paper. The optimal local precision settings are automatically learned using two techniques. The first is based on a quantization sensitivity metric in the form of Hessian trace weighted quantization perturbation. The second is based on mixed precision Transformer architecture search. Alternating direction methods of multipliers (ADMM) are used to efficiently train mixed precision quantized DNN systems. Experiments conducted on Penn Treebank (PTB) and a Switchboard corpus trained LF-MMI TDNN system suggest the proposed mixed precision Transformer quantization techniques achieved model size compression ratios of up to 16 times over the full precision baseline with no recognition performance degradation. When being used to compress a larger full precision Transformer LM with more layers, overall word error rate (WER) reductions up to 1.7% absolute (18% relative) were obtained. <|reference_end|>"
] | [
8,
14,
18,
22
] | {"<|multi_cite_1_1|>": "arxiv-204763", "<|multi_cite_1_2|>": "ss-1116795", "<|multi_cite_2_1|>": "ss-749058", "<|multi_cite_2_2|>": "ss-1357836", "<|multi_cite_3_1|>": "ss-1319646", "<|multi_cite_3_2|>": "ss-1520792", "<|cite_4|>": "ss-789951", "<|multi_cite_5_1|>": "arxiv-228804", "<|multi_cite_5_2|>": "arxiv-323085", "<|cite_6|>": "arxiv-253868", "<|multi_cite_7_1|>": "ss-1384657", "<|multi_cite_7_2|>": "ss-1856491", "<|multi_cite_7_3|>": "arxiv-130780", "<|multi_cite_7_4|>": "ss-1384658", "<|multi_cite_8_1|>": "arxiv-86395", "<|multi_cite_8_2|>": "arxiv-383963", "<|multi_cite_8_3|>": "arxiv-363231", "<|cite_9|>": "ss-968155", "<|cite_10|>": "arxiv-233329", "<|cite_11|>": "arxiv-171621", "<|multi_cite_12_1|>": "arxiv-363231", "<|multi_cite_12_2|>": "ss-710341", "<|multi_cite_13_1|>": "ss-1351469", "<|cite_14|>": "ss-1784052"} |
2407.17783 | <|paper_start|> Title: How Lightweight Can A Vision Transformer Be
Abstract: How Lightweight Can A Vision Transformer Be: In this paper, we explore a strategy that uses Mixture-of-Experts (MoE) to streamline, rather than augment, vision transformers. Each expert in an MoE layer is a SwiGLU feedforward network, where V and W2 are shared across the layer. No complex attention or convolutional mechanisms are employed. Depth-wise scaling is applied to progressively reduce the size of the hidden layer and the number of experts is increased in stages. Grouped query attention is used. We studied the proposed approach with and without pre-training on small datasets and investigated whether transfer learning works at this scale. We found that the architecture is competitive even at a size of 0.67M parameters.
Introduction
\label{sec:intro}
In real-world applications of computer vision, such as edge intelligence, small and performant models are still preferred to overcome computational challenges <|cite_start|> (Reference: Bringing AI To Edge: From Deep Learning's Perspective: ) <|cite_end|>. Vision Transformers (ViTs) <|cite_start|> (Reference: An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train.) <|cite_end|> have achieved remarkable results, but their performance drops significantly when the model size and dataset are small <|cite_start|> (Reference: SPAWC 2023 Cover Page: ) <|cite_end|>. Consequently, there are studies investigating lightweight vision transformers that perform well on mid-size datasets. Almost all of these studies either use new types of attention blocks <|cite_start|> (Reference: EdgeViTs: Competing Light-weight CNNs on Mobile Devices with Vision Transformers: Self-attention based models such as vision transformers (ViTs) have emerged as a very competitive architecture alternative to convolutional neural networks (CNNs) in computer vision. Despite increasingly stronger variants with ever-higher recognition accuracies, due to the quadratic complexity of self-attention, existing ViTs are typically demanding in computation and model size. Although several successful design choices (e.g., the convolutions and hierarchical multi-stage structure) of prior CNNs have been reintroduced into recent ViTs, they are still not sufficient to meet the limited resource requirements of mobile devices. This motivates a very recent attempt to develop light ViTs based on the state-of-the-art MobileNet-v2, but still leaves a performance gap behind. In this work, pushing further along this under-studied direction we introduce EdgeViTs, a new family of light-weight ViTs that, for the first time, enable attention-based vision models to compete with the best light-weight CNNs in the tradeoff between accuracy and on-device efficiency. This is realized by introducing a highly cost-effective local-global-local (LGL) information exchange bottleneck based on optimal integration of self-attention and convolutions. For device-dedicated evaluation, rather than relying on inaccurate proxies like the number of FLOPs or parameters, we adopt a practical approach of focusing directly on on-device latency and, for the first time, energy efficiency. Specifically, we show that our models are Pareto-optimal when both accuracy-latency and accuracy-energy trade-offs are considered, achieving strict dominance over other ViTs in almost all cases and competing with the most efficient CNNs. Code is available at https://github.com/saic-fi/edgevit.) 
<|cite_end|> <|cite_start|> (Reference: IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021, Montreal, QC, Canada, October 11-17, 2021: ) <|cite_end|> or integrate convolutional mechanisms <|cite_start|> (Reference: MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer: Light-weight convolutional neural networks (CNNs) are the de-facto for mobile vision tasks. Their spatial inductive biases allow them to learn representations with fewer parameters across different vision tasks. However, these networks are spatially local. To learn global representations, self-attention-based vision trans-formers (ViTs) have been adopted. Unlike CNNs, ViTs are heavy-weight. In this paper, we ask the following question: is it possible to combine the strengths of CNNs and ViTs to build a light-weight and low latency network for mobile vision tasks? Towards this end, we introduce MobileViT, a light-weight and general-purpose vision transformer for mobile devices. MobileViT presents a different perspective for the global processing of information with transformers, i.e., transformers as convolutions. Our results show that MobileViT significantly outperforms CNN- and ViT-based networks across different tasks and datasets. On the ImageNet-1k dataset, MobileViT achieves top-1 accuracy of 78.4% with about 6 million parameters, which is 3.2% and 6.2% more accurate than MobileNetv3 (CNN-based) and DeIT (ViT-based) for a similar number of parameters. On the MS-COCO object detection task, MobileViT is 5.7% more accurate than MobileNetv3 for a similar number of parameters. Our source code is open-source and available at: https://github.com/apple/ml-cvnets) <|cite_end|> into their architectures.
On the other hand, Tan <|cite_start|> (Reference: Pre-training of Lightweight Vision Transformers on Small Datasets with Minimally Scaled Images: Can a lightweight Vision Transformer (ViT) match or exceed the performance of Convolutional Neural Networks (CNNs) like ResNet on small datasets with small image resolutions? This report demonstrates that a pure ViT can indeed achieve superior performance through pre-training, using a masked auto-encoder technique with minimal image scaling. Our experiments on the CIFAR-10 and CIFAR-100 datasets involved ViT models with fewer than 3.65 million parameters and a multiply-accumulate (MAC) count below 0.27G, qualifying them as 'lightweight' models. Unlike previous approaches, our method attains state-of-the-art performance among similar lightweight transformer-based architectures without significantly scaling up images from CIFAR-10 and CIFAR-100. This achievement underscores the efficiency of our model, not only in handling small datasets but also in effectively processing images close to their original scale.) <|cite_end|> has shown that by employing Masked Auto-Encoder (MAE) <|cite_start|> (Reference: Masked Autoencoders Are Scalable Vision Learners: This paper shows that masked autoencoders (MAE) are scalable self-supervised learners for computer vision. Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels. It is based on two core designs. First, we develop an asymmetric encoder-decoder architecture, with an encoder that operates only on the visible subset of patches (without mask tokens), along with a lightweight decoder that reconstructs the original image from the latent representation and mask tokens. Second, we find that masking a high proportion of the input image, e.g., 75%, yields a nontrivial and meaningful self-supervisory task. Coupling these two designs enables us to train large models efficiently and effectively: we accelerate training (by 3x or more) and improve accuracy. Our scalable approach allows for learning high-capacity models that generalize well: e.g., a vanilla ViT-Huge model achieves the best accuracy (87.8%) among methods that use only ImageNet-1K data. Transfer performance in downstream tasks outperforms supervised pre-training and shows promising scaling behavior.) <|cite_end|> as a pre-training strategy, it is possible to get ViT to learn effectively from small datasets. In that work, the ViT consists of 12 transformer encoder layers, each containing a multi-head attention component and a feedforward network. The feedforward network consists of two linear layers: the first expands the output to \textbf{twice}, rather than four times, the embedding size, and the second reduces the output back to the embedding size. To further lighten the model, reducing the expanded output size in the middle of the feedforward network can help, but excessive reduction can negatively affect model performance.
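A minimal sketch of the feedforward block just described is shown below; only the two-times (rather than four-times) expansion is taken from the description above, while the GELU activation and dropout placement are illustrative assumptions.
\begin{verbatim}
import torch.nn as nn

class FeedForward(nn.Module):
    # Transformer-encoder MLP whose hidden layer is only twice the
    # embedding size, instead of the usual four-times expansion.
    def __init__(self, dim, expansion=2, dropout=0.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, dim * expansion),
            nn.GELU(),
            nn.Dropout(dropout),
            nn.Linear(dim * expansion, dim),
            nn.Dropout(dropout),
        )

    def forward(self, x):
        return self.net(x)
\end{verbatim}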
With these considerations in mind, we designed an architecture that uses Mixture-of-Experts (MoE) <|cite_start|> (Reference: Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer: The capacity of a neural network to absorb information is limited by its number of parameters. Conditional computation, where parts of the network are active on a per-example basis, has been proposed in theory as a way of dramatically increasing model capacity without a proportional increase in computation. In practice, however, there are significant algorithmic and performance challenges. In this work, we address these challenges and finally realize the promise of conditional computation, achieving greater than 1000x improvements in model capacity with only minor losses in computational efficiency on modern GPU clusters. We introduce a Sparsely-Gated Mixture-of-Experts layer (MoE), consisting of up to thousands of feed-forward sub-networks. A trainable gating network determines a sparse combination of these experts to use for each example. We apply the MoE to the tasks of language modeling and machine translation, where model capacity is critical for absorbing the vast quantities of knowledge available in the training corpora. We present model architectures in which a MoE with up to 137 billion parameters is applied convolutionally between stacked LSTM layers. On large language modeling and machine translation benchmarks, these models achieve significantly better results than state-of-the-art at lower computational cost.) <|cite_end|> to streamline vision transformers. In our architecture, each expert in a MoE layer is formed by a SwiGLU <|cite_start|> (Reference: GLU Variants Improve Transformer: Gated Linear Units (arXiv:1612.08083) consist of the component-wise product of two linear projections, one of which is first passed through a sigmoid function. Variations on GLU are possible, using different nonlinear (or even linear) functions in place of sigmoid. We test these variants in the feed-forward sublayers of the Transformer (arXiv:1706.03762) sequence-to-sequence model, and find that some of them yield quality improvements over the typically-used ReLU or GELU activations.) <|cite_end|> feedforward network. By design, SwiGLU is heavier in terms of parameters compared to a typical multi-layer perceptron. However, with several experts in a MoE layer, we are able to make the hidden size in SwiGLU smaller than the embedding size without negatively affecting model performance. Furthermore, we share 2 out of the 3 linear transformations in each SwiGLU across the layer. This helps to significantly lower the parameter count while maintaining the strength of MoE. Beyond that, to further reduce the number of parameters, we progressively increase the number of experts in the MoE in stages, while linearly reducing the hidden size by depth, borrowing the idea from depth-wise scaling <|cite_start|> (Reference: DeLighT: Deep and Light-weight Transformer: We introduce a deep and light-weight transformer, DeLighT, that delivers similar or better performance than standard transformer-based models with significantly fewer parameters. DeLighT more efficiently allocates parameters both (1) within each Transformer block using the DeLighT transformation, a deep and light-weight transformation, and (2) across blocks using block-wise scaling, which allows for shallower and narrower DeLighT blocks near the input and wider and deeper DeLighT blocks near the output. 
Overall, DeLighT networks are 2.5 to 4 times deeper than standard transformer models and yet have fewer parameters and operations. Experiments on benchmark machine translation and language modeling tasks show that DeLighT matches or improves the performance of baseline Transformers with 2 to 3 times fewer parameters on average. Our source code is available at: \url{https://github.com/sacmehta/delight}) <|cite_end|>. Lastly, we use grouped query attention <|cite_start|> (Reference: GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints: Multi-query attention (MQA), which only uses a single key-value head, drastically speeds up decoder inference. However, MQA can lead to quality degradation, and moreover it may not be desirable to train a separate model just for faster inference. We (1) propose a recipe for uptraining existing multi-head language model checkpoints into models with MQA using 5% of original pre-training compute, and (2) introduce grouped-query attention (GQA), a generalization of multi-query attention which uses an intermediate (more than one, less than number of query heads) number of key-value heads. We show that uptrained GQA achieves quality close to multi-head attention with comparable speed to MQA.) <|cite_end|> to keep the parameter count low. Source code will be provided in the near future. <|paper_end|>
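A sketch of the MoE layer described in the introduction above is given below: each expert is a SwiGLU block that owns only its gate projection W, while V and W2 are shared across the layer. The top-1 router, the depth-wise hidden-size schedule, and all dimension choices are illustrative assumptions rather than the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedSwiGLUMoE(nn.Module):
    # Each expert applies SwiGLU(x) = (SiLU(x W_e) * (x V)) W2, where W_e is
    # expert-specific while V and W2 are shared, so 2 of the 3 linear
    # transformations are reused by every expert in the layer.
    def __init__(self, dim, hidden, num_experts):
        super().__init__()
        self.router = nn.Linear(dim, num_experts)
        self.W = nn.ModuleList([nn.Linear(dim, hidden, bias=False)
                                for _ in range(num_experts)])  # per-expert gate
        self.V = nn.Linear(dim, hidden, bias=False)             # shared
        self.W2 = nn.Linear(hidden, dim, bias=False)            # shared

    def forward(self, x):                      # x: (batch, tokens, dim)
        gates = F.softmax(self.router(x), dim=-1)
        expert_idx = gates.argmax(dim=-1)      # simple top-1 routing (assumed)
        v = self.V(x)
        out = torch.zeros_like(x)
        for e, W_e in enumerate(self.W):
            mask = (expert_idx == e).unsqueeze(-1).to(x.dtype)
            swiglu = F.silu(W_e(x)) * v
            out = out + mask * gates[..., e:e + 1] * self.W2(swiglu)
        return out

def hidden_size_schedule(dim, layer_index, num_layers,
                         max_ratio=0.5, min_ratio=0.25):
    # Depth-wise scaling (assumed schedule): the SwiGLU hidden size stays
    # below the embedding size and shrinks linearly with depth.
    hi, lo = int(dim * max_ratio), int(dim * min_ratio)
    return int(hi - (hi - lo) * layer_index / max(num_layers - 1, 1))
\end{verbatim}
Sharing V and W2 keeps the parameter count close to that of a single feedforward block, while the per-expert W projections retain the conditional-computation benefit of the MoE.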
"<|reference_start|> Bringing AI To Edge: From Deep Learning's Perspective: <|reference_end|>",
"<|reference_start|> An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train. <|reference_end|>",
"<|reference_start|> SPAWC 2023 Cover Page: <|reference_end|>",
"<|reference_start|> IEEE/CVF International Conference on Computer Vision Workshops, ICCVW 2021, Montreal, QC, Canada, October 11-17, 2021: <|reference_end|>"
] | [
0,
1,
2,
4
] | {"<|cite_1|>": "ss-748766", "<|cite_2|>": "arxiv-298443", "<|cite_3|>": "ss-685902", "<|multi_cite_4_1|>": "arxiv-417977", "<|multi_cite_4_2|>": "ss-682260", "<|cite_5|>": "arxiv-371693", "<|cite_6|>": "arxiv-582608", "<|cite_7|>": "arxiv-380525", "<|cite_8|>": "arxiv-114965", "<|cite_9|>": "arxiv-247953", "<|cite_10|>": "arxiv-282260", "<|cite_11|>": "arxiv-507406"} |
1702.03429 | <|paper_start|> Title: Decoupled Sampling Based Planning Method for Multiple Autonomous Vehicles
Abstract: Decoupled Sampling Based Planning Method for Multiple Autonomous Vehicles: This paper proposes a sampling-based planning algorithm to control autonomous vehicles. We propose an improved Rapidly-exploring Random Tree which includes the definition of K-nearest points, together with a two-stage sampling strategy that adjusts RRT in order to perform maneuvers while avoiding collisions. The simulation results show the success of the algorithm.
Introduction
Motion planning plays an important role in the navigation of autonomous vehicles. In the presence of constraints such as collision avoidance, speed limits, and rules of motion, it must find a trajectory from the initial point to the goal point.
In recent studies, different methods have been proposed and developed in this field. A vast introduction to motion and path planning problems and existing techniques and solutions can be found in <|cite_start|> (Reference: {Planning algorithms: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.) <|cite_end|>, <|cite_start|> (Reference: Incremental Sampling-based Algorithm for Minimum-violation Motion Planning: This paper studies the problem of control strategy synthesis for dynamical systems with differential constraints to fulfill a given reachability goal while satisfying a set of safety rules. Particular attention is devoted to goals that become feasible only if a subset of the safety rules are violated. The proposed algorithm computes a control law, that minimizes the level of unsafety while the desired goal is guaranteed to be reached. This problem is motivated by an autonomous car navigating an urban environment while following rules of the road such as "always travel in right lane'' and "do not change lanes frequently''. Ideas behind sampling based motion-planning algorithms, such as Probabilistic Road Maps (PRMs) and Rapidly-exploring Random Trees (RRTs), are employed to incrementally construct a finite concretization of the dynamics as a durational Kripke structure. In conjunction with this, a weighted finite automaton that captures the safety rules is used in order to find an optimal trajectory that minimizes the violation of safety rules. We prove that the proposed algorithm guarantees asymptotic optimality, i.e., almost-sure convergence to optimal solutions. We present results of simulation experiments and an implementation on an autonomous urban mobility-on-demand system.) <|cite_end|>, <|cite_start|> (Reference: Distributed receding horizon coverage control for multiple mobile robots: This paper presents a distributed coverage control scheme based on receding horizon control for the coordination of multiple mobile robots to cover an event. The mission space is modeled using a probability density function representing the probability of occurrence of events. The distributed scheme is generated by the decomposition of a single optimal coverage problem into distributed receding horizon coverage control problems, each of them associated with one robot. The distributed coverage scheme is proven to optimally stabilize robots at a centroidal Voronoi configuration, which is an optimal configuration to cover an event. 
The control scheme is tested in three simulation environments to illustrate its good performance at environments with any probability density function of events, as well as its ability to generalize to much larger groups of mobile robots.) <|cite_end|>, <|cite_start|> (Reference: Analysis of robot motion performance and implications to economy principles: This paper utilizes the idea of "robot ergonomics" in which robotic tasks are evaluated according to ergonomics criteria. Robot performance is continuously recorded in terms of total motion for each robotic link, the load applied on each joint, and the accuracy of each motion. Evaluating robot motion can assist in the design of a robotic cell, where robot position and layout of peripheral equipment are determined. In previous work, the main measure has been arm joint utilization (AJU). Two new measures are developed here: arm joint accuracy (AJA) and arm joint load (AJL). A test case of an articulated industrial robot, performing a kitting task is presented. Robot performance is evaluated for various bin positions, and based on these results, the bin position relative to the robot is determined based on all three measures, AJA, AJL and AJU.) <|cite_end|>, <|cite_start|> (Reference: Principles of robot motion: theory, algorithms, and implementations: Robot motion planning has become a major focus of robotics. Research findings can be applied not only to robotics but to planning routes on circuit boards, directing digital actors in computer graphics, robot-assisted surgery and medicine, and in novel areas such as drug design and protein folding. This text reflects the great advances that have taken place in the last ten years, including sensor-based planning, probabalistic planning, localization and mapping, and motion planning for dynamic and nonholonomic systems. Its presentation makes the mathematical underpinnings of robot motion accessible to students of computer science and engineering, rleating low-level implementation details to high-level algorithmic concepts.) <|cite_end|>.
Which methods are more appropriate and work better than others depends on the nature of the problem at hand.
An ideal motion planner should satisfy several requirements, including low computational complexity, optimality, and completeness. However, few of them attempt to, or are able to, solve the planning problem in its complete generality <|cite_start|> (Reference: {Robot Motion Planning: Robot motion planning is the problem of automatically generating robot trajectories and control laws from high-level specifications. This article covers the main ideas and concepts and reviews some of the results and approaches. Configuration and work spaces, potential-based approaches, road maps, sampling-based algorithms, and cell decompositions are among the included topics. This article also provides an introduction to the emerging area of symbolic motion planning and control.
Keywords:
motion planning;
configuration space;
cell decomposition;
potential function;
sampling;
model checking;
symbolic methods) <|cite_end|>.
Several heuristic search algorithms for path planning have been proposed and used in known workspaces, such as $A^*$, $Grassfire$, $Dijkstra$, and $D^*$. There are other methods which are based on model predictive control (MPC) and allow planning trajectories while taking into account the complex vehicle dynamics <|cite_start|> (Reference: Vehicle yaw stability control by coordinated active front steering and differential braking in the tire sideslip angles domain: Vehicle active safety receives ever increasing attention in the attempt to achieve zero accidents on the road. In this paper, we investigate a control architecture that has the potential of improving yaw stability control by achieving faster convergence and reduced impact on the longitudinal dynamics. We consider a system where active front steering and differential braking are available and propose a model predictive control (MPC) strategy to coordinate the actuators. We formulate the vehicle dynamics with respect to the tire slip angles and use a piecewise affine (PWA) approximation of the tire force characteristics. The resulting PWA system is used as prediction model in a hybrid MPC strategy. After assessing the benefits of the proposed approach, we synthesize the controller by using a switched MPC strategy, where the tire conditions (linear/saturated) are assumed not to change during the prediction horizon. The assessment of the controller computational load and memory requirements indicates that it is capable of real-time execution in automotive-grade electronic control units. Experimental tests in different maneuvers executed on low-friction surfaces demonstrate the high performance of the controller.) <|cite_end|>. For example, an algorithm based on MPC is proposed in <|cite_start|> (Reference: Development of a genetic-algorithm-based nonlinear model predictive control scheme on velocity and steering of autonomous vehicles: Model predictive controller (MPC) has demonstrated its competency in controlling autonomous vehicles. But to apply the current MPC-based schemes, it has to be formulated into certain formats and meet all prerequisites in order to fit the optimization solvers. To eliminate the gaps, in this paper, we propose a nonlinear MPC controller which controls the vehicle velocity and steering simultaneously. The optimization solver is based on genetic algorithms (GA). As compared to other solvers, using GA in the optimization enables a more flexible structure for MPC formulation. The cost function and constraints can be designed in a more accurate, meaningful, and direct way. Both simulation and on-field test results showed that the vehicle under the control of the proposed nonlinear MPC is able to follow the road center line accurately and consistently, even at sharp corners. Moreover, the results also showed that passengers' safety and comfort can be well taken care of under the proposed MPC scheme as both the vehicle movement acceleration and steering acceleration are well confined within a safety range. The promising results indicate that the proposed GA-based nonlinear MPC can be a suitable solution to autonomous vehicle control.) <|cite_end|> for real-time obstacle avoidance for ground vehicles.
MPC also has been combined with motion primitives in <|cite_start|> (Reference: Predictive control for agile semi-autonomous ground vehicles using motion primitives: This paper presents a hierarchical control framework for the obstacle avoidance of autonomous and semi-autonomous ground vehicles. The high-level planner is based on motion primitives created from a four-wheel nonlinear dynamic model. Parameterized clothoids and drifting maneuvers are used to improve vehicle agility. The low-level tracks the planned trajectory with a nonlinear Model Predictive Controller. The first part of the paper describes the proposed control architecture and methodology. The second part presents simulative and experimental results with an autonomous and semi-autonomous ground vehicle traveling at high speed on an icy surface.) <|cite_end|>, <|cite_start|> (Reference: {Robot Motion Planning: Robot motion planning is the problem of automatically generating robot trajectories and control laws from high-level specifications. This article covers the main ideas and concepts and reviews some of the results and approaches. Configuration and work spaces, potential-based approaches, road maps, sampling-based algorithms, and cell decompositions are among the included topics. This article also provides an introduction to the emerging area of symbolic motion planning and control.
Keywords: motion planning; configuration space; cell decomposition; potential function; sampling; model checking; symbolic methods) <|cite_end|>, <|cite_start|> (Reference: From Dynamic Programming to RRTs: Algorithmic Design of Feasible Trajectories: ) <|cite_end|> in order to plan controls for fast maneuvering of ground vehicles.
For autonomous vehicle applications, where the vehicle has to move in an obstacle-rich environment, the computational complexity of the motion planning algorithm is an important issue.
Since the vehicles usually move at high speed, the path planner has to find a collision-free path quickly.
The computational time of complete and deterministically complete motion planning algorithms grows exponentially with the dimension of the configuration space. Hence, these algorithms are usually not appropriate for real-time path planning problems for autonomous vehicles, especially for problems with dense obstacles.
Furthermore, the optimal path of a vehicle may become infeasible due to different static and dynamic obstacles. Therefore, if a preplanned trajectory becomes infeasible during the high-speed motion of a vehicle, multiple candidate trajectories are required. An alternative technique for these situations is to use sampling-based algorithms.
Recently, probabilistic sampling-based methods, such as rapidly exploring random trees algorithm (RRT) <|cite_start|> (Reference: Randomized kinodynamic planning for robust visual servoing: We incorporate a randomized kinodynamic path planning approach with image-based control of a robotic arm equipped with an in-hand camera. The proposed approach yields continuously differentiable camera trajectories by taking camera dynamics into account, while accounting for a critical set of image and physical constraints at the planning stage. The proposed planner explores the camera state space for permissible trajectories by iteratively extending a search tree in this space and simultaneously tracking these trajectories in the robot configuration space. The planned camera trajectories are projected into the image space to obtain desired feature trajectories which are then tracked using an image-based visual servoing scheme. We validate the effectiveness of the proposed framework in incorporating the aforementioned constraints through a number of visual servoing experiments on a six-degree-of-freedom robotic arm. We also provide empirical results that demonstrate its performance in the presence of uncertainties, and accordingly suggest additional planning strategies to increase robustness with respect to possible deviations from planned trajectories.) <|cite_end|>, probabilistic roadmap algorithm (PRM) <|cite_start|> (Reference: Probabilistic roadmaps for path planning in high-dimensional
configuration spaces: A new motion planning method for robots in static workspaces is presented. This method proceeds in two phases: a learning phase and a query phase. In the learning phase, a probabilistic roadmap is constructed and stored as a graph whose nodes correspond to collision-free configurations and whose edges correspond to feasible paths between these configurations. These paths are computed using a simple and fast local planner. In the query phase, any given start and goal configurations of the robot are connected to two nodes of the roadmap; the roadmap is then searched for a path joining these two nodes. The method is general and easy to implement. It can be applied to virtually any type of holonomic robot. It requires selecting certain parameters (e.g., the duration of the learning phase) whose values depend on the scene, that is the robot and its workspace. But these values turn out to be relatively easy to choose, Increased efficiency can also be achieved by tailoring some components of the method (e.g., the local planner) to the considered robots. In this paper the method is applied to planar articulated robots with many degrees of freedom. Experimental results show that path planning can be done in a fraction of a second on a contemporary workstation (/spl ap/150 MIPS), after learning for relatively short periods of time (a few dozen seconds).) <|cite_end|> and PRM$^*$, have been proposed and developed for robot and vehicle path planning.
These sampling-based algorithms made it possible to solve motion planning problems that were considered infeasible before <|cite_start|> (Reference: Principles of robot motion: theory, algorithms, and implementations: Robot motion planning has become a major focus of robotics. Research findings can be applied not only to robotics but to planning routes on circuit boards, directing digital actors in computer graphics, robot-assisted surgery and medicine, and in novel areas such as drug design and protein folding. This text reflects the great advances that have taken place in the last ten years, including sensor-based planning, probabalistic planning, localization and mapping, and motion planning for dynamic and nonholonomic systems. Its presentation makes the mathematical underpinnings of robot motion accessible to students of computer science and engineering, rleating low-level implementation details to high-level algorithmic concepts.) <|cite_end|>, especially in high-dimensional and complex environments.
In these algorithms, instead of requiring an explicit representation of the configuration space, a roadmap (a topological graph representing the path alternatives) is constructed.
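A schematic sketch of the learning phase of such a roadmap-based (PRM-type) planner is given below; the sampling routine, the local planner, and the number of neighbours are illustrative assumptions only.
\begin{verbatim}
import numpy as np

def build_prm(sample_free, edge_free, n_nodes=500, k=10):
    """Learning phase of a PRM-type planner: sample collision-free
    configurations and connect each one to its k nearest neighbours whenever
    the local planner reports the connecting edge as collision-free.
    `sample_free` and `edge_free` are problem-specific placeholders."""
    nodes = np.array([sample_free() for _ in range(n_nodes)])
    edges = []
    for i, q in enumerate(nodes):
        order = np.argsort(np.linalg.norm(nodes - q, axis=1))
        for j in order[1:k + 1]:          # skip the node itself
            if edge_free(q, nodes[j]):
                edges.append((i, int(j)))
    return nodes, edges   # query phase: attach start/goal nodes and run a graph search
\end{verbatim}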
Although these sampling-based algorithms are not complete, they are probabilistically complete, which ensures that planning is as successful as possible.
When there is at least one feasible path, as the number of sampling nodes tends to infinity, the probability of failure of the algorithm to find a feasible path will exponentially decay to zero. However, the selection of random node leads to different planning costs. Based on this, in recent days, different asymptotically optimal RRT-based path planning algorithms were proposed in <|cite_start|> (Reference: Optimal Sampling-Based Planning for Linear-Quadratic Kinodynamic Systems: We propose a new method for applying RRT* to kinodynamic motion planning problems by using finite-horizon linear quadratic regulation (LQR) to measure cost and to extend the tree. First, we introduce the method in the context of arbitrary affine dynamical systems with quadratic costs. For these systems, the algorithm is shown to converge to optimal solutions almost surely. Second, we extend the algorithm to non-linear systems with non-quadratic costs, and demonstrate its performance experimentally.) <|cite_end|> - <|cite_start|> (Reference: Sampling-based optimal motion planning for non-holonomic dynamical
systems: Sampling-based motion planning algorithms, such as the Probabilistic RoadMap (PRM) and the Rapidly-exploring Random Tree (RRT), have received a large and growing amount of attention during the past decade. Most recently, sampling-based algorithms, such as the PRM* and RRT*, that guarantee asymptotic optimality, i.e., almost-sure convergence towards optimal solutions, have been proposed. Despite the experimental success of asymptotically-optimal sampling-based algorithms, their extensions to handle complex non-holonomic dynamical systems remains largely an open problem. In this paper, with the help of results from differential geometry, we extend the RRT* algorithm to handle a large class of non-holonomic dynamical systems. We demonstrate the performance of the algorithm in computational experiments involving the Dubins' car dynamics.) <|cite_end|>, and <|cite_start|> (Reference: Robust sampling-based motion planning for autonomous tracked vehicles in deformable high slip terrain: This paper presents an optimal global planner for autonomous tracked vehicles navigating in off-road terrain with uncertain slip, which affects the vehicle as a process noise. This paper incorporates two fields of study: slip estimation and motion planning. For slip estimation, an experimental result from [9] is used to model the effect of the slip on the vehicle in various soil types. For motion planning, a robust incremental sampling based motion planning algorithm (CC-RRT*) is combined with the LQG-MP algorithm. CC-RRT* yields the optimal and probabilistically feasible trajectory by using a chance constrained approach under the RRT* framework. LQG-MP provides the capability of considering the role of compensator in the motion planning phase and bounds the degree of uncertainty to appropriate size. In simulation, the planner successfully finds the optimal and robust solution. In addition, the planner is compared with an RRT* algorithm with dilated obstacles to show that it avoids being overly conservative.) <|cite_end|>, <|cite_start|> (Reference: Anytime computation of time-optimal off-road vehicle maneuvers using the rrt*: Incremental sampling-based motion planning algorithms such as the Rapidly-exploring Random Trees (RRTs) have been successful in efficiently solving computationally challenging motion planning problems involving complex dynamical systems. A recently proposed algorithm, called the RRT*, also provides asymptotic optimality guarantees, i.e., almost-sure convergence to optimal trajectories (which the RRT algorithm lacked) while maintaining the computational efficiency of the RRT algorithm. In this paper, time-optimal maneuvers for a high-speed off-road vehicle taking tight turns on a loose surface are studied using the RRT* algorithm. Our simulation results show that the aggressive skidding maneuver, usually called the trail-braking maneuver, naturally emerges from the RRT* algorithm as the minimum-time trajectory. Along the way, we extend the RRT* algorithm to handle complex dynamical systems, such as those that are described by nonlinear differential equations and involve high-dimensional state spaces, which may be of independent interest. We also exploit the RRT* as an anytime computation framework for nonlinear optimization problems.) <|cite_end|>.
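For concreteness, a minimal RRT loop is sketched below; the sampling distribution, steering step, and collision check are generic placeholders and do not reflect the specific asymptotically optimal variants cited above.
\begin{verbatim}
import numpy as np

def rrt(start, goal, sample_free, edge_free, step=0.5, max_iter=5000, goal_tol=0.5):
    """Minimal RRT loop: steer the nearest tree node towards a random sample
    and keep the new node if the connecting segment is collision-free."""
    nodes = [np.asarray(start, dtype=float)]
    parent = {0: None}
    for _ in range(max_iter):
        q_rand = np.asarray(sample_free(), dtype=float)
        i_near = int(np.argmin([np.linalg.norm(q - q_rand) for q in nodes]))
        d = q_rand - nodes[i_near]
        q_new = nodes[i_near] + step * d / (np.linalg.norm(d) + 1e-12)
        if edge_free(nodes[i_near], q_new):
            parent[len(nodes)] = i_near
            nodes.append(q_new)
            if np.linalg.norm(q_new - np.asarray(goal, dtype=float)) < goal_tol:
                break                      # a path into the goal region was found
    return nodes, parent   # follow `parent` back to the root to extract the path
\end{verbatim}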
It has been shown that for RRT and other sampling-based path planners, the workspace is explored efficiently only when this planning cost function reflects the true cost-to-go <|cite_start|> (Reference: Reducing metric sensitivity in randomized trajectory design: This paper addresses the trajectory design for generic problems that involve: (1) complicated global constraints that include nonconvex obstacles, (2) nonlinear equations of motion that involve substantial drift due to momentum, and (3) a high-dimensional state space. Our approach to these challenging problems is to develop randomized planning algorithms based on rapidly-exploring random trees (RRTs). RRTs use metric-induced heuristics to conduct a greedy exploration of the state space; however, performance substantially degrades when the chosen metric does not adequately reflect the true cost-to-go. In this paper, we present a version of the RRT that refines its exploration strategy in the presence of a poor metric. Experiments on problems in vehicle dynamics and spacecraft navigation indicate substantial performance improvement over existing techniques.) <|cite_end|>.
As has been shown in <|cite_start|> (Reference: From Dynamic Programming to RRTs: Algorithmic Design of Feasible Trajectories: ) <|cite_end|>, the choice of the distance metric used as the cost function to find the nearest node affects the performance of RRT-based algorithms significantly.
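As a simple illustration of how the metric enters the algorithm, the nearest-node query below uses a weighted Euclidean metric over the state components; the weights shown are hypothetical and are not the metric adopted in this work.
\begin{verbatim}
import numpy as np

def nearest_node(nodes, q_rand, weights):
    """Nearest-node query under a weighted Euclidean metric; `weights` states
    how strongly each state component (e.g. position versus heading) enters
    the cost. The concrete weights in the usage example are hypothetical."""
    diffs = np.asarray(nodes, dtype=float) - np.asarray(q_rand, dtype=float)
    costs = np.sqrt((np.asarray(weights, dtype=float) * diffs**2).sum(axis=1))
    return int(np.argmin(costs))

# e.g. nearest_node(tree_nodes, sample, weights=[1.0, 1.0, 0.2])  # weigh heading less
\end{verbatim}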
For our motion planning problem, we propose two RRT-based algorithms, one for path planning and the other for motion timing, in order to avoid collisions between different vehicles.
In the past decade, the rapid development in the field of autonomous vehicles, going from single-vehicle tasks to missions that require cooperation, coordination, and communication among a number of vehicles, has made the availability of adaptable motion planners more and more important.
When a group of vehicles is tasked to carry out a mission cooperatively in the presence of complex obstacles, the possibility of inter-vehicle collisions adds to the complexity of the mission planning system.
In such systems, it is also required to guarantee that each vehicle meets its spatial configuration constraints.
Several approaches have been suggested to solve the motion planning problem of multiple autonomous agents.
More specifically, in <|cite_start|> (Reference: Traffic management for automated highway systems using model-based predictive control: We present an integrated traffic management and control approach for automated highway systems (AHS). The AHS consist of interacting roadside controllers and intelligent vehicles that are organized in platoons with short intraplatoon distances and larger distances between platoons. All vehicles are assumed to be fully automated, i.e., throttle, braking, and steering commands are determined by an automated onboard controller. The proposed control approach is based on a hierarchical traffic control architecture for AHS, and it also takes the connection and transition between the nonautomated part of the road network and the AHS into account. In particular, we combine dynamic speed limits and lane allocation for the platoons on the AHS highways with access control for the on-ramps using ramp metering, and we propose a model-based predictive control approach to determine optimal speed limits and lane allocations, as well as optimal release times for the platoons at the on-ramps. To illustrate the potential of the proposed traffic control method, we apply it to a simple simulation example.) <|cite_end|> a vehicle-follower control based on model-based predictive control is proposed; in <|cite_start|> (Reference: Controlling a platoon of vehicles via a second order sliding mode approach: Abstract This paper presents the design of a longitudinal control system for a platoon of vehicles. The main objectives of an automatic vehicle following system are the increment of traffic capacity while improving safety and comfort. The distributed controllers, one for each vehicle, are designed relying on a simple nonlinear vehicle model. The chosen control technique is the so-called second order sliding mode control methodology. This choice is motivated by the robustness features of the sliding mode design, which are particularly appropriate dealing with the automotive context. Moreover, the proposed approach allows to circumvent the chattering problem, typical of sliding mode control. The individual vehicle stability and platoon stability are guaranteed by the proposed control system. This latter is tested in simulation considering a critical stop–and–go traffic situation.) <|cite_end|> a sliding mode longitudinal controller is used to control a group of vehicles which have inter-vehicle communication; <|cite_start|> (Reference: Cooperative adaptive cruise control implementation of team mekar at the grand cooperative driving challenge: This paper presents the cooperative adaptive cruise control implementation of Team Mekar at the Grand Cooperative Driving Challenge (GCDC). The Team Mekar vehicle used a dSpace microautobox for access to the vehicle controller area network bus and for control of the autonomous throttle intervention and the electric-motor-operated brake pedal. The vehicle was equipped with real-time kinematic Global Positioning System (RTK GPS) and an IEEE 802.11p modem installed in an onboard computer for vehicle-to-vehicle (V2V) communication. The Team Mekar vehicle did not have an original-equipment-manufacturer-supplied adaptive cruise control (ACC). ACC/Cooperative adaptive cruise control (CACC) based on V2V-communicated GPS position/velocity and preceding vehicle acceleration feedforward were implemented in the Team Mekar vehicle. 
This paper presents experimental and simulation results of the Team Mekar CACC implementation, along with a discussion of the problems encountered during the GCDC cooperative mobility runs.) <|cite_end|> suggested a cruise control, in which vehicle uses information about the spacing and the relative speed from the following and the preceding vehicles; in <|cite_start|> (Reference: Cooperative adaptive cruise control: Network-aware analysis of string stability: In this paper, we consider a Cooperative Adaptive Cruise Control (CACC) system, which regulates intervehicle distances in a vehicle string, for achieving improved traffic flow stability and throughput. Improved performance can be achieved by utilizing information exchange between vehicles through wireless communication in addition to local sensor measurements. However, wireless communication introduces network-induced imperfections, such as transmission delays, due to the limited bandwidth of the network and the fact that multiple nodes are sharing the same medium. Therefore, we approach the design of a CACC system from a Networked Control System (NCS) perspective and present an NCS modeling framework that incorporates the effect of sampling, hold, and network delays that occur due to wireless communication and sampled-data implementation of the CACC controller over this wireless link. Based on this network-aware modeling approach, we develop a technique to study the so-called string stability property of the string, in which vehicles are interconnected by a vehicle following control law and a constant time headway spacing policy. This analysis technique can be used to investigate tradeoffs between CACC performance (string stability) and network specifications (such as delays), which are essential in the multidisciplinary design of CACC controllers. Finally, we demonstrate the validity of the presented framework in practice by experiments performed with CACC-equipped prototype vehicles.) <|cite_end|>, a system of the automatic vehicle following has been suggested to adopt a constant spacing policy; and <|cite_start|> (Reference: Autonomous intelligent cruise control using a novel multiple-controller framework incorporating fuzzy-logic-based switching and tuning: ) <|cite_end|> suggested a cruise control algorithm using fuzzy concept. A game theory based approach is described by <|cite_start|> (Reference: Direct adaptive longitudinal control of vehicle platoons: Higher traffic capacities require smaller intervehicular spacings. At such intervehicular separations, aerodynamic drag force changes significantly with the distance to be maintained. The mass of the vehicle varies with the number of passengers. In this paper, we present a Lyapunov-based decentralised adaptive control algorithm to compensate for such parametric variations. We examine this direct adaptive control algorithm for platoon performance and parameter convergence. We present the simulation and experimental results to demonstrate the effectiveness of the adaptive controller.<<ETX>>) <|cite_end|>, <|cite_start|> (Reference: {Planning algorithms: Planning algorithms are impacting technical disciplines and industries around the world, including robotics, computer-aided design, manufacturing, computer graphics, aerospace applications, drug design, and protein folding. This coherent and comprehensive book unifies material from several sources, including robotics, control theory, artificial intelligence, and algorithms. 
The treatment is centered on robot motion planning but integrates material on planning in discrete spaces. A major part of the book is devoted to planning under uncertainty, including decision theory, Markov decision processes, and information spaces, which are the “configuration spaces” of all sensor-based planning problems. The last part of the book delves into planning under differential constraints that arise when automating the motions of virtually any mechanical system. Developed from courses taught by the author, the book is intended for students, engineers, and researchers in robotics, artificial intelligence, and control theory as well as computer graphics, algorithms, and computational biology.) <|cite_end|>, <|cite_start|> (Reference: A randomized roadmap method for path and manipulation planning: This paper presents a new randomized roadmap method for motion planning for many DOF robots that can be used to obtain high quality roadmaps even when C-space is crowded. The main novelty in the authors' approach is that roadmap candidate points are chosen on C-obstacle surfaces. As a consequence, the roadmap is likely to contain difficult paths, such as those traversing long, narrow passages in C-space. The approach can be used for both collision-free path planning and for manipulation planning of contact tasks. Experimental results with a planar articulated 6 DOF robot show that, after preprocessing, difficult path planning operations can often be carried out in less than a second.) <|cite_end|> and <|cite_start|> (Reference: Lp string stability of cascaded systems: Application to vehicle platooning: Nowadays, throughput has become a limiting factor in road transport. An effective means to increase the road throughput is to employ a small intervehicle time gap using automatic vehicle-following control systems. String stability, i.e., the disturbance attenuation along the vehicle string, is considered an essential requirement for the design of those systems. However, the formal notion of string stability is not unambiguous in literature, since both stability and performance interpretations exist. Therefore, a novel definition for string stability of nonlinear cascaded systems is proposed, using input-output properties. This definition is shown to result in well-known string stability conditions for linear cascaded systems. The theoretical results are experimentally validated using a platoon of six passenger vehicles equipped with cooperative adaptive cruise control.) <|cite_end|> to guarantee safety during the maneuver for all vehicles. <|cite_start|> (Reference: Static game approach for solving lane-merging conflict between autonomous vehicles: The right-of-way conflict such as lane-merging problem between Autonomous Vehicles (AVs) is an inescapable issue. Related studies formalize the problem using centralized decision-making models of “Reservation” or “Auction” from the perspective of intersection management, that are suitable only for the scenarios with a centralized intersection agent; and the fiat currency spent on bidding may trigger certain controversial issues concerning law or tax. This paper presents the first-stage results of a long-term work: (i) establishes a prototype of 2-player static game within distributed decision-making paradigm, to formalized and solve the lane-merging conflict between 2 AVs, in order to adapt to scenarios with or without a centralized decision-maker, i.e. 
intersection or road segment; (ii) designs the game's dynamic payoffs which result from the space-time status of AV rather than any financial currency, to avoid possible social controversies. Numerical results show that AV could promisingly make compromised decisions to avoid the potential deadlock of right-of-way conflict.) <|cite_end|> suggested a static game approach for lane merging maneuver.\\
When more than one autonomous vehicle works in the same area, the problem of vehicle collision has to be faced. Even if the mission space is planned and cleared of any conflict between cars, it may happen that vehicles collide. Collisions can be due to differences in dynamic and kinematic characteristics, speed, and external disturbances. Therefore, a key issue for a multi-vehicle maneuver is safety, represented by the requirement that cars never collide. Consequently, an approach that controls multiple autonomous vehicles with a collision avoidance feature becomes a way to improve the transportation system.
In this paper, we address the multiple-vehicle motion planning problem by dividing it into two phases: 1) a path planning phase, in which a path is planned for each vehicle by proposing and using an improved RRT method that increases the optimality of the standard RRT and decreases the computational time, and 2) a motion timing phase, in which a semi-deterministic sampling method is proposed that guarantees collision avoidance between different vehicles. \\
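The decoupled structure of this two-phase scheme can be summarized by the following schematic outline; it is a generic illustration of the decomposition only, and does not reproduce the proposed IRRT or motion timing algorithms, which are detailed in the following sections.
\begin{verbatim}
def decoupled_multi_vehicle_plan(vehicles, plan_path, assign_timing):
    """Schematic outline of the decoupled two-phase decomposition (a generic
    illustration; the concrete path planner and timing method are supplied
    by the caller). Phase 1 plans a geometric path per vehicle; phase 2
    selects timing profiles along the fixed paths so that vehicles never
    occupy the same region at the same time."""
    paths = {v: plan_path(v) for v in vehicles}   # phase 1: independent path planning
    timings = assign_timing(paths)                # phase 2: collision-free motion timing
    return paths, timings
\end{verbatim}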
The remainder of the paper is organized as follows. In Section \ref{Sec.Formulation}, the problem formulation is given, including the definition of our decoupled planning method, the vehicle model, and the analysis of the integration method. Then, the IRRT algorithm for path planning and the motion timing algorithm are provided in Section \ref{Sec.DSBP}, followed by the decoupled sampling-based algorithm for multiple vehicles. In Section \ref{Sec.Simulation}, simulation results are provided to show the effectiveness of the proposed approach. Concluding remarks are given in Section \ref{Sec.Conclusion}.\\
<|paper_end|> | [
"<|reference_start|> Incremental Sampling-based Algorithm for Minimum-violation Motion Planning: This paper studies the problem of control strategy synthesis for dynamical systems with differential constraints to fulfill a given reachability goal while satisfying a set of safety rules. Particular attention is devoted to goals that become feasible only if a subset of the safety rules are violated. The proposed algorithm computes a control law, that minimizes the level of unsafety while the desired goal is guaranteed to be reached. This problem is motivated by an autonomous car navigating an urban environment while following rules of the road such as \"always travel in right lane'' and \"do not change lanes frequently''. Ideas behind sampling based motion-planning algorithms, such as Probabilistic Road Maps (PRMs) and Rapidly-exploring Random Trees (RRTs), are employed to incrementally construct a finite concretization of the dynamics as a durational Kripke structure. In conjunction with this, a weighted finite automaton that captures the safety rules is used in order to find an optimal trajectory that minimizes the violation of safety rules. We prove that the proposed algorithm guarantees asymptotic optimality, i.e., almost-sure convergence to optimal solutions. We present results of simulation experiments and an implementation on an autonomous urban mobility-on-demand system. <|reference_end|>",
"<|reference_start|> Principles of robot motion: theory, algorithms, and implementations: Robot motion planning has become a major focus of robotics. Research findings can be applied not only to robotics but to planning routes on circuit boards, directing digital actors in computer graphics, robot-assisted surgery and medicine, and in novel areas such as drug design and protein folding. This text reflects the great advances that have taken place in the last ten years, including sensor-based planning, probabalistic planning, localization and mapping, and motion planning for dynamic and nonholonomic systems. Its presentation makes the mathematical underpinnings of robot motion accessible to students of computer science and engineering, rleating low-level implementation details to high-level algorithmic concepts. <|reference_end|>",
"<|reference_start|> Predictive control for agile semi-autonomous ground vehicles using motion primitives: This paper presents a hierarchical control framework for the obstacle avoidance of autonomous and semi-autonomous ground vehicles. The high-level planner is based on motion primitives created from a four-wheel nonlinear dynamic model. Parameterized clothoids and drifting maneuvers are used to improve vehicle agility. The low-level tracks the planned trajectory with a nonlinear Model Predictive Controller. The first part of the paper describes the proposed control architecture and methodology. The second part presents simulative and experimental results with an autonomous and semi-autonomous ground vehicle traveling at high speed on an icy surface. <|reference_end|>",
"<|reference_start|> Lp string stability of cascaded systems: Application to vehicle platooning: Nowadays, throughput has become a limiting factor in road transport. An effective means to increase the road throughput is to employ a small intervehicle time gap using automatic vehicle-following control systems. String stability, i.e., the disturbance attenuation along the vehicle string, is considered an essential requirement for the design of those systems. However, the formal notion of string stability is not unambiguous in literature, since both stability and performance interpretations exist. Therefore, a novel definition for string stability of nonlinear cascaded systems is proposed, using input-output properties. This definition is shown to result in well-known string stability conditions for linear cascaded systems. The theoretical results are experimentally validated using a platoon of six passenger vehicles equipped with cooperative adaptive cruise control. <|reference_end|>"
] | [
1,
4,
8,
28
] | {"<|cite_1|>": "ss-1119213", "<|cite_2|>": "arxiv-45471", "<|cite_3|>": "ss-998826", "<|cite_4|>": "ss-998827", "<|cite_5|>": "ss-689031", "<|cite_6|>": "ss-775769", "<|cite_7|>": "ss-816568", "<|cite_8|>": "ss-998828", "<|cite_9|>": "ss-1294548", "<|cite_10|>": "ss-775769", "<|cite_11|>": "ss-998829", "<|cite_12|>": "ss-998830", "<|cite_13|>": "ss-745493", "<|cite_15|>": "ss-689031", "<|cite_16|>": "ss-737690", "<|cite_17|>": "ss-1456583", "<|cite_18|>": "ss-998831", "<|cite_19|>": "ss-998832", "<|cite_20|>": "ss-998833", "<|cite_21|>": "ss-998829", "<|cite_22|>": "ss-998834", "<|cite_23|>": "ss-998835", "<|cite_24|>": "ss-998836", "<|cite_25|>": "ss-1279959", "<|cite_26|>": "ss-998837", "<|cite_27|>": "ss-998838", "<|cite_28|>": "ss-1119213", "<|cite_29|>": "ss-2484864", "<|cite_30|>": "ss-1284870", "<|cite_31|>": "ss-998839"} |
2111.03965 | <|paper_start|> Title: Tensor Deblurring and Denoising Using Total Variation
Abstract: Tensor Deblurring and Denoising Using Total Variation: We consider denoising and deblurring problems for tensors. While images can be discretized as matrices, the analogous procedure for color images or videos leads to a tensor formulation. We extend the classical ROF functional for variational denoising and deblurring to the tensor case by employing multi-dimensional total variation regularization. Furthermore, the resulting minimization problem is calculated by the FISTA method generalized to the tensor case. We provide some numerical experiments by applying the scheme to the denoising, the deblurring, and the recoloring of color images as well as to the deblurring of videos.
Introduction
Digital image restoration is an important task in image processing since it can be applied to various areas of applied sciences such as medical and astronomical imaging, film restoration, and image and video coding. There are already various methods
(most of them not tensor-based) to recover signal/images from noise and blurry observations,
for instance, statistical-based approaches <|cite_start|> (Reference: Linear and nonlinear image deblurring: A documented study: Nonlinear image deblurring procedures based on probabilistic considerations are widely believed to outperform conventional linear methods. This paper is exclusively concerned with nonsmooth images such as those that occur in biomedical imaging, where reconstruction of high frequency detail is of prime interest, and where avoidance of a priori smoothness constraints is a major concern. The theoretical basis behind each of the following nonlinear procedures is examined: the Lucy--Richardson method, the maximum likelihood E-M algorithm, the Poisson maximum a posteriori method, and the Nunez--Llacer version of the maximum entropy method. A linear iterative method, VanCittert's iteration, is also studied. It is shown that each of the first three methods, as well as VanCittert's method, lack a necessary ingredient for successful solution of the ill-posed deblurring problem, while in the maximum entropy method, the enforced smoothness may have adverse consequences in medical imaging. A direct linear method, the slow evolution from the continuation boundary (SECB) method, designed specifically for nonsmooth images, is also considered. That method is stabilized by constraining the blurring operator as well as the solution and does not require smoothness constraints. It is shown that useful error estimates can be obtained in the SECB method while this is impossible in Tikhonov's method without a priori bounds on derivatives of the unknown solution. Reconstruction experiments on low noise synthetic MRI data show that thousands of iterations are necessary to achieve sufficient resolution in the iterative procedures. However, the SECB method provides higher resolution at considerable savings in computer time. At high noise levels, the iterative algorithms are shown to diverge. At these same noise levels, the SECB method produces reconstructions comparable in quality to those that would be obtained in the iterative methods, were one able to terminate the divergent algorithm at that iteration which best approximates the true solution in the L1 norm.) <|cite_end|> <|cite_start|> (Reference: Image reconstruction and restoration: overview of common estimation structures and problems: Developments in the theory of image reconstruction and restoration over the past 20 or 30 years are outlined. Particular attention is paid to common estimation structures and to practical problems not properly solved yet. The problem of image reconstruction and restoration is first formulated. Some of the current regularization approaches used to solve the problem are then described. The concepts of a priori information and compound criterion are introduced. A Bayesian interpretation of the regularization techniques is given which clarifies the role of the tuning parameters and indicates how they could be estimated. The practical aspects of computing the solution, first when the hyperparameters are known and second when they must be estimated, are then considered. Conclusions are drawn, and points that still need to be investigated are outlined. >) <|cite_end|>,
methods employing Fourier and/or wavelet transforms <|cite_start|> (Reference: Wavelet-based deconvolution for ill-conditioned systems: In this paper, we propose a new approach to wavelet-based deconvolution. Roughly speaking, the algorithm comprises Fourier-domain system inversion followed by wavelet-domain noise suppression. Our approach subsumes a number of other wavelet-based deconvolution methods. In contrast to other wavelet-based approaches, however, we employ a regularized inverse filter, which allows the algorithm to operate even when the inverse system is ill-conditioned or non-invertible. Using a mean-square-error metric, we strike an optimal balance between Fourier-domain and wavelet-domain regularization. The result is a fast deconvolution algorithm ideally suited to signals and images with edges and other singularities. In simulations with real data, the algorithm outperforms the LTI Wiener filter and other wavelet-based deconvolution algorithms in terms of both visual quality and MSE performance.) <|cite_end|> <|cite_start|> (Reference: Digital image restoration: The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the reader who may already be well-versed in image restoration. The perspective on the topic is one that comes primarily from work done in the field of signal processing. Thus, many of the techniques and works cited relate to classical signal processing approaches to estimation theory, filtering, and numerical analysis. In particular, the emphasis is placed primarily on digital image restoration algorithms that grow out of an area known as "regularized least squares" methods. It should be noted, however, that digital image restoration is a very broad field, as we discuss, and thus contains many other successful approaches that have been developed from different perspectives, such as optics, astronomy, and medical imaging, just to name a few. In the process of reviewing this topic, we address a number of very important issues in this field that are not typically discussed in the technical literature.) <|cite_end|>, or variational methods <|cite_start|> (Reference: Total variation based image restoration with free local constraints: A new total variation based approach was developed by Rudin, Osher and Fatemi (see Physica D., vol.60, p.259, 1992) to overcome the basic limitations of all smooth regularization algorithms. The TV-based technique use the L/sup 1/ norm of the magnitude of a gradient, thus making discontinuous and nonsmooth solutions possible. In TV image restoration, the solution is obtained by solving a time-dependent, nonlinear PDE on a manifold that satisfies the degradation constraints. In practical applications, one assumes a space-varying blurring kernel and signal-dependent (e.g. multiplicative) noise. The evolution part of the TV-based PDE turned out to be related to the curve shortening equation, but scaled by an inverse.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Image recovery via total variation minimization and related problems
: ) <|cite_end|>.
Among the variational methods,
total variation (TV) regularization is one of the most prominent examples. TV regularization, first introduced by Rudin, Osher, and Fatemi (ROF) in 1992 <|cite_start|> (Reference: Nonlinear total variation based noise removal algorithms: ) <|cite_end|>, has become one of the most used techniques in image processing and computer vision because it is known to remove noises while preserving sharp edges and boundaries. It has since evolved from an image denoising method into a more general technique applied to various inverse problems such as deblurring, blind deconvolution, and inpainting.
The results in this article are an extension to tensors of
the ROF methodology and the techniques describe by Teboulle and Beck in <|cite_start|> (Reference: Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems: This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deburring problem. To achieve this task, we combine an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient projections-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints.) <|cite_end|>. Teboulle and Beck's approach was to minimize the TV-functional using gradient projection, proximal mappings and Nestorov acceleration, applied to the dual functional of ROF that was proposed by Chambolle in <|cite_start|> (Reference: An Algorithm for Total Variation Minimization and Applications: ) <|cite_end|> <|cite_start|> (Reference: Total Variation Minimization and a Class of Binary MRF Models: ) <|cite_end|>.
Here we extend these algorithms to multi-channel images and videos.
After discretization, these objects can be represented as
tensors. For simplicity, most formulas that we present are for third-order tensors. However, keep in mind that this work
can be extended to any n-th order tensor with little effort.
We provide some details for the total variation functional
in Section~\ref{Sec2}, describe the minimization algorithms
that we use in Section~\ref{sec:fista}, and
then present our numerical approach for the
denoising and deblurring for tensors using total variation in the subsequent sections.
We will provide some numerical results to prove the effectiveness of our approach using colored images and videos. <|paper_end|> | [
"<|reference_start|> Image reconstruction and restoration: overview of common estimation structures and problems: Developments in the theory of image reconstruction and restoration over the past 20 or 30 years are outlined. Particular attention is paid to common estimation structures and to practical problems not properly solved yet. The problem of image reconstruction and restoration is first formulated. Some of the current regularization approaches used to solve the problem are then described. The concepts of a priori information and compound criterion are introduced. A Bayesian interpretation of the regularization techniques is given which clarifies the role of the tuning parameters and indicates how they could be estimated. The practical aspects of computing the solution, first when the hyperparameters are known and second when they must be estimated, are then considered. Conclusions are drawn, and points that still need to be investigated are outlined. > <|reference_end|>",
"<|reference_start|> Digital image restoration: The article introduces digital image restoration to the reader who is just beginning in this field, and provides a review and analysis for the reader who may already be well-versed in image restoration. The perspective on the topic is one that comes primarily from work done in the field of signal processing. Thus, many of the techniques and works cited relate to classical signal processing approaches to estimation theory, filtering, and numerical analysis. In particular, the emphasis is placed primarily on digital image restoration algorithms that grow out of an area known as \"regularized least squares\" methods. It should be noted, however, that digital image restoration is a very broad field, as we discuss, and thus contains many other successful approaches that have been developed from different perspectives, such as optics, astronomy, and medical imaging, just to name a few. In the process of reviewing this topic, we address a number of very important issues in this field that are not typically discussed in the technical literature. <|reference_end|>",
"<|reference_start|> Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems: This paper studies gradient-based schemes for image denoising and deblurring problems based on the discretized total variation (TV) minimization model with constraints. We derive a fast algorithm for the constrained TV-based image deburring problem. To achieve this task, we combine an acceleration of the well known dual approach to the denoising problem with a novel monotone version of a fast iterative shrinkage/thresholding algorithm (FISTA) we have recently introduced. The resulting gradient-based algorithm shares a remarkable simplicity together with a proven global rate of convergence which is significantly better than currently known gradient projections-based methods. Our results are applicable to both the anisotropic and isotropic discretized TV functionals. Initial numerical results demonstrate the viability and efficiency of the proposed algorithms on image deblurring problems with box constraints. <|reference_end|>",
"<|reference_start|> Total Variation Minimization and a Class of Binary MRF Models: <|reference_end|>"
] | [
1,
3,
7,
9
] | {"<|multi_cite_1_1|>": "ss-1327958", "<|multi_cite_1_2|>": "ss-1485334", "<|multi_cite_2_1|>": "ss-1327959", "<|multi_cite_2_2|>": "ss-1294985", "<|multi_cite_3_1|>": "ss-788957", "<|multi_cite_3_2|>": "ss-2001062", "<|cite_4|>": "ss-700950", "<|cite_5|>": "ss-2277616", "<|multi_cite_6_1|>": "ss-1292208", "<|multi_cite_6_2|>": "ss-1327960"} |
2401.17472 | <|paper_start|> Title: Convergence of the deep BSDE method for stochastic control problems formulated through the stochastic maximum principle
Abstract: Convergence of the deep BSDE method for stochastic control problems formulated through the stochastic maximum principle: It is well-known that decision-making problems from stochastic control can be formulated by means of a forward-backward stochastic differential equation (FBSDE). Recently, the authors of Ji et al. 2022 proposed an efficient deep learning algorithm based on the stochastic maximum principle (SMP). In this paper, we provide a convergence result for this deep SMP-BSDE algorithm and compare its performance with other existing methods. In particular, by adopting a strategy as in Han and Long 2020, we derive a-posteriori estimate, and show that the total approximation error can be bounded by the value of the loss functional and the discretization error. We present numerical examples for high-dimensional stochastic control problems, both in case of drift- and diffusion control, which showcase superior performance compared to existing algorithms.
Introduction
\label{sec1}
Stochastic control theory is a powerful paradigm for modelling and analyzing decision-making problems that are subject to some random dynamics. Classical approaches for solving these kinds of problems include methods based on the dynamic programming principle (DP) <|cite_start|> (Reference: Dynamic Programming and Stochastic Control Processes: ) <|cite_end|>, the stochastic maximum principle (SMP) <|cite_start|> (Reference: {An Introductory Approach to Duality in Optimal Stochastic Control: The purpose of this paper is to compare the results which have been recently obtained in optimal stochastic control. Various maximum principles are shown to derive from a general Pontryagin princip...) <|cite_end|> <|cite_start|> (Reference: Mathematical Theory of Optimal Processes: ) <|cite_end|> and other techniques, see e.g. <|cite_start|> (Reference: Numerical Methods for Stochastic Control Problems in Continuous Time: H. J. Kushner and P. G. Dupuis. Springer-Verlag, New York/Heidelberg, May 1992. 439 pp., DM 98. ISBN 3-540-97834-8) <|cite_end|> <|cite_start|> (Reference: The Rate Base for Rate Regulation: ) <|cite_end|> <|cite_start|> (Reference: The Rate of Convergence of Finite-Difference Approximations for Parabolic Bellman Equations with Lipschitz Coefficients in Cylindrical Domains: ) <|cite_end|> <|cite_start|> (Reference: ON THE RATE OF CONVERGENCE OF APPROXIMATION SCHEMES FOR BELLMAN EQUATIONS ASSOCIATED WITH OPTIMAL STOPPING TIME PROBLEMS: We provide estimates on the rate of convergence for approximation schemes for Bellman equations associated with optimal stopping of controlled diffusion processes. These results extend (and slightly improve) the recent results by Barles & Jakobsen to the more difficult time-dependent case. The added difficulties are due to the presence of boundary conditions (initial conditions!) and the new structure of the equation which is now a parabolic variational inequality. The method presented is purely analytic and rather general and is based on earlier work by Krylov and Barles & Jakobsen. As applications we consider so-called control schemes based on the dynamic programming principle and finite difference methods (though not in the most general case). In the optimal stopping case these methods are similar to the Brennan & Schwartz scheme. A simple observation allows us to obtain the optimal rate 1/2 for the finite difference methods, and this is an improvement over previous results by Krylov and Barles & Jakobsen. Finally, we present an idea that allows us to improve all the above-mentioned results in the linear case. In particular, we are able to handle finite difference methods with variable diffusion coefficients without the reduction of order of convergence observed by Krylov in the nonlinear case.) <|cite_end|>. However, these approaches cannot easily handle high-dimensional problems and suffer from the “curse of dimensionality". A recent candidate solution technique for stochastic control problems is formed by deep learning-based approaches, due to their remarkable performance in high-dimensional settings. The paper <|cite_start|> (Reference: Deep Learning Approximation for Stochastic Control Problems: Many real world stochastic control problems suffer from the "curse of dimensionality". To overcome this difficulty, we develop a deep learning approach that directly solves high-dimensional stochastic control problems based on Monte-Carlo sampling. 
We approximate the time-dependent controls as feedforward neural networks and stack these networks together through model dynamics. The objective function for the control problem plays the role of the loss function for the deep neural network. We test this approach using examples from the areas of optimal trading and energy storage. Our results suggest that the algorithm presented here achieves satisfactory accuracy and at the same time, can handle rather high dimensional problems.) <|cite_end|> developed a deep learning algorithm that directly approximates the optimal control process by a neural network at each step in time, and by training a terminal loss functional for all time steps simultaneously. Similar approaches have been explored in the control theory community <|cite_start|> (Reference: Neural networks for control systems - A survey: ) <|cite_end|>, <|cite_start|> (Reference: Piecewise Affine Neural Networks and Nonlinear Control: ) <|cite_end|> and <|cite_start|> (Reference: A multilayered neural network controller: A multilayered neural network processor is used to control a given plant. Several learning architectures are proposed for training the neural controller to provide the appropriate inputs to the plant so that a desired response is obtained. A modified error-back propagation algorithm, based on propagation of the output error through the plant, is introduced. The properties of the proposed architectures are studied through a simulation example.<<ETX>>) <|cite_end|>, before the rise of computing power and machine learning.
Inspired by the remarkable performance in <|cite_start|> (Reference: Deep Learning Approximation for Stochastic Control Problems: Many real world stochastic control problems suffer from the "curse of dimensionality". To overcome this difficulty, we develop a deep learning approach that directly solves high-dimensional stochastic control problems based on Monte-Carlo sampling. We approximate the time-dependent controls as feedforward neural networks and stack these networks together through model dynamics. The objective function for the control problem plays the role of the loss function for the deep neural network. We test this approach using examples from the areas of optimal trading and energy storage. Our results suggest that the algorithm presented here achieves satisfactory accuracy and at the same time, can handle rather high dimensional problems.) <|cite_end|>, the research community has developed several neural network-based algorithms for stochastic control problems, see e.g. <|cite_start|> (Reference: Deep Neural Networks Algorithms for Stochastic Control Problems on Finite Horizon: Numerical Applications: ) <|cite_end|> <|cite_start|> (Reference: Deep neural networks algorithms for stochastic control problems on finite horizon: Convergence analysis: This paper develops algorithms for high-dimensional stochastic control problems based on deep learning and dynamic programming. Unlike classical approximate dynamic programming approaches, we first approximate the optimal policy by means of neural networks in the spirit of deep reinforcement learning, and then the value function by Monte Carlo regression. This is achieved in the dynamic programming recursion by performance or hybrid iteration, and regress now methods from numerical probabilities. We provide a theoretical justification of these algorithms. Consistency and rate of convergence for the control and value function estimates are analyzed and expressed in terms of the universal approximation error of the neural networks, and of the statistical error when estimating network function, leaving aside the optimization error. Numerical results on various applications are presented in a companion paper (arxiv.org/abs/1812.05916) and illustrate the performance of the proposed algorithms.) <|cite_end|> <|cite_start|> (Reference: Derivatives of Feynman–Kac Semigroups: ) <|cite_end|>. Many of these algorithms build upon deriving a forward-backward stochastic differential equation (FBSDE), associated to the control problem. Pioneered by the well-known deep BSDE method, initially proposed by <|cite_start|> (Reference: Solving high-dimensional partial differential equations using deep learning: Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as the "curse of dimensionality". This paper introduces a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated using backward stochastic differential equations and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and cost. 
This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.) <|cite_end|> and later extended by <|cite_start|> (Reference: Convergence of the deep BSDE method for coupled FBSDEs: ) <|cite_end|> to the coupled setting, several solution approaches have been proposed, showing outstanding empirical performance in high-dimensional frameworks.
These methods derive the FBSDE through a stochastic representation of the solution of the Hamilton-Jacobi-Bellman (HJB) equation, motivated by the non-linear Feyman-Kac formula. However, such techniques become infeasible when the diffusion of the state process is also controlled, as they do not solve for the value function's second derivative, which is necessary to compute the optimal (diffusion) control.
As a remedy, enabling diffusion control, the authors in <|cite_start|> (Reference: Solving stochastic optimal control problem via stochastic maximum principle with deep learning method: In this paper, we aim to solve the high dimensional stochastic optimal control problem from the view of the stochastic maximum principle via deep learning. By introducing the extended Hamiltonian system which is essentially an FBSDE with a maximum condition, we reformulate the original control problem as a new one. Three algorithms are proposed to solve the new control problem. Numerical results for different examples demonstrate the effectiveness of our proposed algorithms, especially in high dimensional cases. And an important application of this method is to calculate the sub-linear expectations, which correspond to a kind of fully nonlinear PDEs.) <|cite_end|> proposed a deep BSDE algorithm where the associated FBSDE is derived from the stochastic maximum principle (SMP), which we call the deep SMP-BSDE. Other SMP-based algorithms include <|cite_start|> (Reference: {Deep Learning Methods for Mean Field Control Problems With Delay: We consider a general class of mean field control problems described by stochastic delayed differential equations of McKean–Vlasov type. Two numerical algorithms are provided based on deep learning techniques, one is to directly parameterize the optimal control using neural networks, the other is based on numerically solving the McKean–Vlasov forward anticipated backward stochastic differential equation (MV-FABSDE) system. In addition, we establish the necessary and sufficient stochastic maximum principle of this class of mean field control problems with delay based on the differential calculus on function of measures, and the existence and uniqueness results are proved for the associated MV-FABSDE system under suitable conditions. Mathematical Subject Classification (2000): 93E20, 60G99, 68-04) <|cite_end|> <|cite_start|> (Reference: Convergence Analysis of Machine Learning Algorithms for the Numerical Solution of Mean Field Control and Games: I - The Ergodic Case: We propose two algorithms for the solution of the optimal control of ergodic McKean-Vlasov dynamics. Both algorithms are based on the approximation of the theoretical solutions by neural networks, the latter being characterized by their architecture and a set of parameters. This allows the use of modern machine learning tools, and efficient implementations of stochastic gradient descent. The first algorithm is based on the idiosyncrasies of the ergodic optimal control problem. We provide a mathematical proof of the convergence of the algorithm, and we analyze rigorously the approximation by controlling the different sources of error. The second method is an adaptation of the deep Galerkin method to the system of partial differential equations issued from the optimality condition. We demonstrate the efficiency of these algorithms on several numerical examples, some of them being chosen to show that our algorithms succeed where existing ones failed. We also argue that both methods can easily be applied to problems in dimensions larger than what can be found in the existing literature. Finally, we illustrate the fact that, although the first algorithm is specifically designed for mean field control problems, the second one is more general and can also be applied to the partial differential equation systems arising in the theory of mean field games.) 
<|cite_end|> in the context of mean-field control and mean-field games. We refer to <|cite_start|> (Reference: Recent Developments in Machine Learning Methods for Stochastic Control and Games: Stochastic optimal control and games have a wide range of applications, from finance and economics to social sciences, robotics, and energy management. Many real-world applications involve complex models that have driven the development of sophisticated numerical methods. Recently, computational methods based on machine learning have been developed for solving stochastic control problems and games. In this review, we focus on deep learning methods that have unlocked the possibility of solving such problems, even in high dimensions or when the structure is very complex, beyond what traditional numerical methods can achieve. We consider mostly the continuous time and continuous space setting. Many of the new approaches build on recent neural-network-based methods for solving high-dimensional partial differential equations or backward stochastic differential equations, or on model-free reinforcement learning for Markov decision processes that have led to breakthrough results. This paper provides an introduction to these methods and summarizes the state-of-the-art works at the crossroad of machine learning and stochastic control and games.) <|cite_end|> and <|cite_start|> (Reference: Neural networks-based algorithms for stochastic control and PDEs in finance: This paper presents machine learning techniques and deep reinforcement learningbased algorithms for the efficient resolution of nonlinear partial differential equations and dynamic optimization problems arising in investment decisions and derivative pricing in financial engineering. We survey recent results in the literature, present new developments, notably in the fully nonlinear case, and compare the different schemes illustrated by numerical tests on various financial applications. We conclude by highlighting some future research directions.) <|cite_end|> for a detailed overview of deep learning algorithms for stochastic control problems.
Furthermore, the authors in <|cite_start|> (Reference: Convergence of a robust deep FBSDE method for stochastic control: In this paper, we propose a deep learning based numerical scheme for strongly coupled FBSDEs, stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value to the backward equation is not a free parameter, and with a new loss function being the weighted sum of the cost of the control problem, and a variance term which coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs, fails for a simple linear-quadratic control problem, and motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of time continuous and time discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one being the one that failed for a direct extension of the deep BSDE method.) <|cite_end|> found that deep BSDE methods may fail to converge for FBSDEs stemming from stochastic control problems via DP, due to local minima. They proposed a robust counterpart by adding a regularization component to the loss function which resolved this issue in the case of drift control.
Despite the fact that there are many studies concerning stochastic control, only a few theoretical derivations are available regarding the convergence of machine learning-based approaches for FBSDEs stemming from stochastic control. Theoretical work in this direction includes the study of convergence of the deep BSDE method by <|cite_start|> (Reference: Convergence of the deep BSDE method for coupled FBSDEs: ) <|cite_end|>, which provides a posteriori error estimate. The authors of <|cite_start|> (Reference: Convergence of a robust deep FBSDE method for stochastic control: In this paper, we propose a deep learning based numerical scheme for strongly coupled FBSDEs, stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value to the backward equation is not a free parameter, and with a new loss function being the weighted sum of the cost of the control problem, and a variance term which coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs, fails for a simple linear-quadratic control problem, and motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of time continuous and time discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one being the one that failed for a direct extension of the deep BSDE method.) <|cite_end|> prove the convergence of a robust deep BSDE method by exploiting the special structure of the FBSDE. For works in the decoupled framework see e.g. <|cite_start|> (Reference: Deep backward schemes for high-dimensional nonlinear PDEs: We propose new machine learning schemes for solving high dimensional nonlinear partial differential equations (PDEs). Relying on the classical backward stochastic differential equation (BSDE) representation of PDEs, our algorithms estimate simultaneously the solution and its gradient by deep neural networks. These approximations are performed at each time step from the minimization of loss functions defined recursively by backward induction. The methodology is extended to variational inequalities arising in optimal stopping problems. We analyze the convergence of the deep learning schemes and provide error estimates in terms of the universal approximation of neural networks. Numerical results show that our algorithms give very good results till dimension 50 (and certainly above), for both PDEs and variational inequalities problems. For the PDEs resolution, our results are very similar to those obtained by the recent method in \cite{weinan2017deep} when the latter converges to the right solution or does not diverge. Numerical tests indicate that the proposed methods are not stuck in poor local minimaas it can be the case with the algorithm designed in \cite{weinan2017deep}, and no divergence is experienced. The only limitation seems to be due to the inability of the considered deep neural networks to represent a solution with a too complex structure in high dimension.) <|cite_end|>, <|cite_start|> (Reference: The One Step Malliavin scheme: new discretization of BSDEs implemented with deep learning regressions: A novel discretization is presented for decoupled forward–backward stochastic differential equations (FBSDE) with differentiable coefficients, simultaneously solving the BSDE and its Malliavin sensitivity problem. 
The control process is estimated by the corresponding linear BSDE driving the trajectories of the Malliavin derivatives of the solution pair, which implies the need to provide accurate $\varGamma $ estimates. The approximation is based on a merged formulation given by the Feynman–Kac formulae and the Malliavin chain rule. The continuous time dynamics is discretized with a theta-scheme. In order to allow for an efficient numerical solution of the arising semidiscrete conditional expectations in possibly high dimensions, it is fundamental that the chosen approach admits to differentiable estimates. Two fully-implementable schemes are considered: the BCOS method as a reference in the one-dimensional framework and neural network Monte Carlo regressions in case of high-dimensional problems, similarly to the recently emerging class of Deep BSDE methods (Han et al. (2018 Solving high-dimensional partial differential equations using deep learning. Proc. Natl. Acad. Sci., 115, 8505–8510); Huré et al. (2020 Deep backward schemes for high-dimensional nonlinear PDEs. Math. Comp., 89, 1547–1579)). An error analysis is carried out to show $\mathbb{L}^2$ convergence of order $1/2$, under standard Lipschitz assumptions and additive noise in the forward diffusion. Numerical experiments are provided for a range of different semilinear equations up to $50$ dimensions, demonstrating that the proposed scheme yields a significant improvement in the control estimations.) <|cite_end|> and <|cite_start|> (Reference: Convergence of the Backward Deep BSDE Method with Applications to Optimal Stopping Problems: The optimal stopping problem is one of the core problems in financial markets, with broad applications such as pricing American and Bermudan options. The deep BSDE method [Han, Jentzen and E, PNAS, 115(34):8505-8510, 2018] has shown great power in solving high-dimensional forward-backward stochastic differential equations (FBSDEs), and inspired many applications. However, the method solves backward stochastic differential equations (BSDEs) in a forward manner, which can not be used for optimal stopping problems that in general require running BSDE backwardly. To overcome this difficulty, a recent paper [Wang, Chen, Sudjianto, Liu and Shen, arXiv:1807.06622, 2018] proposed the backward deep BSDE method to solve the optimal stopping problem. In this paper, we provide the rigorous theory for the backward deep BSDE method. Specifically, 1. We derive the a posteriori error estimation, i.e., the error of the numerical solution can be bounded by the training loss function; and; 2. We give an upper bound of the loss function, which can be sufficiently small subject to universal approximations. We give two numerical examples, which present consistent performance with the proved theory.) <|cite_end|>.
In this paper, we provide convergence results for the deep SMP-BSDE algorithm and compare the method with existing algorithms through numerical results. Unlike in <|cite_start|> (Reference: Solving high-dimensional partial differential equations using deep learning: Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as the "curse of dimensionality". This paper introduces a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated using backward stochastic differential equations and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and cost. This opens up new possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their inter-relationships.) <|cite_end|> <|cite_start|> (Reference: Convergence of the deep BSDE method for coupled FBSDEs: ) <|cite_end|> and <|cite_start|> (Reference: Convergence of a robust deep FBSDE method for stochastic control: In this paper, we propose a deep learning based numerical scheme for strongly coupled FBSDEs, stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value to the backward equation is not a free parameter, and with a new loss function being the weighted sum of the cost of the control problem, and a variance term which coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs, fails for a simple linear-quadratic control problem, and motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of time continuous and time discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one being the one that failed for a direct extension of the deep BSDE method.) <|cite_end|>, we consider FBSDEs that come from the SMP, instead of the HJB equation, and therefore the results and standard estimates for FBSDEs stemming from DP cannot be directly applied. Nevertheless, with some extra effort, we are able to adopt a strategy similar to that in <|cite_start|> (Reference: Convergence of the deep BSDE method for coupled FBSDEs: ) <|cite_end|>, and derive an a posteriori error estimate for the numerical solutions of the deep SMP-BSDE algorithm. In this way, we are able to tackle diffusion control problems.
This paper is organized as follows: In Section \ref{sec2}, we briefly review the theoretical foundations related to the SMP. In Section \ref{sec3}, we formulate the deep SMP-BSDE algorithm for diffusion control problems. We carry out a convergence analysis in Section \ref{sec4}; in particular, an a posteriori error estimate is derived for the deep SMP-BSDE algorithm. In Section \ref{sec5}, we demonstrate the performance of the algorithm through numerical examples for both drift and diffusion control.
Stochastic Control and the Stochastic Maximum Principle
\label{sec2}
In this section, we review some basic results from stochastic control theory and show how to reformulate a stochastic control problem into an FBSDE through the SMP.
Let $\left(\Omega, \mathcal{F},\left\{\mathcal{F}_t\right\}_{0 \leq t \leq T}, \mathbb{P}\right)$ be a filtered probability space, supporting an $m$-dimensional Brownian motion $W$ and its natural filtration $\mathcal{F}=\left\{\mathcal{F}_t\right\}_{0 \leq t \leq T}$, augmented by all $\mathbb{P}$-null sets. Fixing $0<T<\infty$, we consider the following finite horizon stochastic control problem
\begin{equation}\label{stochastic_control_problem}
\left\{
\begin{aligned}
& \inf_{u\in \mathcal{U}[0, T]} J\left( 0, x_0 ; u(\cdot) \right) := \inf_{u\in \mathcal{U}[0, T]} \mathbb{E} \left( \int_{0}^{T} \Bar{f}(s, X_s, u_s) \,ds + g(X_T) \right), \\
& X_t = x_0 + \int_0^t \Bar{b}(s, X_s, u_s) \,ds + \int_0^t \Bar{\sigma}(s, X_s, u_s) \,dW_s, \quad t \in[0, T],
\end{aligned}
\right.
\end{equation}
where $\bar{b}: [0, T]\times \R^{d}\times \R^\ell\to \R^d$, $\bar{\sigma}: [0, T]\times \R^d\times \R^\ell\to \R^{d\times m}$, $\bar{f}: [0, T]\times \R^d\times\R^\ell\to \R$ and $g: \R^d\to\R$ are deterministic functions, and $X_t, u_t$ are $\R^d, \R^\ell$-valued stochastic processes, respectively. The set of admissible controls, $\mathcal{U}[0,T]$, is defined as
$$
\mathcal{U}[0, T] \coloneqq \left\{u:[0, T] \times \Omega \rightarrow U \mid u \in L_{\mathcal{F}}^2\left(0, T ; \mathbb{R}^\ell \right)\right\},
$$
with
$$
L_{\mathcal{F}}^2\left(0, T ; \mathbb{R}^\ell \right)
\coloneqq \left\{x:[0, T] \times \Omega \rightarrow \mathbb{R}^\ell
\mid x \text{ is } \mathcal{F}\text{-adapted and } \mathbb{E} \left[\int_0^T\left\|x_t\right\|^2 \,dt\right]<\infty\right\},
$$
where we write $\| \cdot \|$ for both the Euclidean norm of vectors and the Frobenius norm of matrices. We assume that the control domain $U$ is a convex body in $\mathbb{R}^\ell$.
Any process $u_t \in \mathcal{U}[0, T]$ is called an admissible control of \eqref{stochastic_control_problem}, and the pair $(X_t, u_t)$, where $X_t$ denotes the corresponding state process, is called an admissible pair. Furthermore, $(X_t, u_t)$ is called an optimal pair whenever it attains the infimum in \eqref{stochastic_control_problem}. Accordingly, we define the value function $V(\cdot, \cdot)$ of \eqref{stochastic_control_problem} as follows
\begin{equation}\label{def:value_function}
\left\{
\begin{aligned}
& V(t, x) \coloneqq \inf_{u \in \mathcal{U}[t, T]} J(t, x ; u(\cdot)), \quad \forall(t, x) \in[0, T) \times \mathbb{R}^d,\\
& V(T, y) \coloneqq g(y), \quad \forall y \in \mathbb{R}^d,
\end{aligned}
\right.
\end{equation}
and denote its partial derivatives by $V_x \coloneqq \partial_x V(t, x)$, $V_{xx} \coloneqq \partial_{xx} V(t, x)$ and $V_t \coloneqq \partial_t V(t, x)$, suppressing their dependence on $(t, x)$.
\begin{remark} We shall further distinguish two important classes of problem \eqref{stochastic_control_problem}. We call \eqref{stochastic_control_problem} a drift control problem if the control $u_t$ enters only the drift coefficient, and a diffusion control problem if $u_t$ also enters the diffusion coefficient.
\end{remark}
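As a simple running illustration (our own example, introduced here for concreteness and not taken from the cited references), consider the scalar linear-quadratic (LQ) setting $d = m = \ell = 1$ with
\begin{equation*}
\bar{b}(t, x, u) = a x + \beta u, \qquad \bar{\sigma}(t, x, u) = c x + \gamma u, \qquad \bar{f}(t, x, u) = \tfrac{1}{2}\left(\kappa x^2 + r u^2\right), \qquad g(x) = \tfrac{1}{2} h x^2,
\end{equation*}
where $a, \beta, c, \gamma \in \R$, $\kappa, h \geq 0$ and $r > 0$ are constants. If $\gamma \neq 0$, the control enters the diffusion coefficient and the problem is a diffusion control problem; setting $\gamma = 0$ reduces it to a drift control problem. We return to this example below.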
Associated with the stochastic control problem \eqref{stochastic_control_problem}, we introduce the adjoint equation, as follows
\begin{equation}\label{eq:adjoint}
\left\{
\begin{aligned}
& P_t = P_0 - \int_0^t \nabla_x \Bar{H}(s, X_s, u_s, P_s, Q_s) \,ds + \int_0^t Q_s\, d W_s, \quad t \in[0, T] \\
& P_T = - \nabla_x g(X_T),
\end{aligned}
\right.
\end{equation}
where $\nabla_x \Bar{H}$ denotes the gradient with respect to $x$ of the Hamiltonian $\Bar{H}$, which is defined by
\begin{equation}\label{def:Ham}
\begin{aligned}
& \Bar{H}(t, x, u, p, q) := p^\top \Bar{b}(t, x, u) + \operatorname{Tr}\left(q^{\top} \Bar{\sigma}(t, x, u)\right) - \Bar{f}(t, x, u),\\
&\quad \forall (t, x, u, p, q) \in[0, T] \times \mathbb{R}^d \times U \times \mathbb{R}^d \times \mathbb{R}^{d \times m}.
\end{aligned}
\end{equation}
Equation \eqref{eq:adjoint} is a BSDE whose solution is a pair of processes $(P(\cdot), Q(\cdot)) \in L_{\mathcal{F}}^2\left(0, T ; \mathbb{R}^d\right) \times \left(L_{\mathcal{F}}^2\left(0, T ; \mathbb{R}^d\right)\right)^m$.
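For the LQ illustration introduced above, the Hamiltonian \eqref{def:Ham} reads
\begin{equation*}
\bar{H}(t, x, u, p, q) = p\,(a x + \beta u) + q\,(c x + \gamma u) - \tfrac{1}{2}\left(\kappa x^2 + r u^2\right),
\end{equation*}
so that $\nabla_x \bar{H}(t, x, u, p, q) = a p + c q - \kappa x$, and the adjoint equation \eqref{eq:adjoint} becomes the linear BSDE
\begin{equation*}
P_t = P_0 - \int_0^t \left( a P_s + c Q_s - \kappa X_s \right) ds + \int_0^t Q_s \, dW_s, \qquad P_T = -h X_T.
\end{equation*}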
Concerning the well-posedness of the BSDE \eqref{eq:adjoint}, we collect the coefficient maps as $\bar{\varphi} \coloneqq \{\bar{b}, \bar{\sigma}, \bar{f}, g\}$ and first state the following assumption.
\begin{assumption}\label{assume_adjoint}\
The map $\bar{\varphi}$ is $C^2$ in $x$, and $\bar{\varphi}(t, 0, u)$ is bounded for any $u\in U$. Moreover, $\bar{\varphi}$, $\bar{\varphi}_x$ and $\bar{\varphi}_{xx}$ are uniformly Lipschitz in $x$ and $u$.
\end{assumption}
\begin{remark}\label{wellpose_adjoint} Under Assumption \ref{assume_adjoint}, the adjoint equation, i.e., the BSDE \eqref{eq:adjoint}, admits a unique solution for every admissible pair $(X_t, u_t)$. However, among all admissible 4-tuples $(X_t, u_t, P_t, Q_t)$, only one also minimizes the objective function in \eqref{stochastic_control_problem}. That is to say, in order to obtain an FBSDE that admits a unique solution without explicitly involving the objective function, extra effort is needed in the reformulation. For this purpose, we recall the SMP.
\end{remark}
\begin{theorem}[Stochastic maximum principle]\label{thm:SMP}
Let Assumption \ref{assume_adjoint} hold, and let $\left( X_t^*, u_t^*, P_t^*, Q_t^* \right)$ be an admissible 4-tuple. Suppose that $g(\cdot)$ is convex, that $\Bar{H}\left(t, \cdot, \cdot, P_t^*, Q_t^* \right)$ defined by \eqref{def:Ham} is concave for all $t \in[0, T]$, $\mathbb{P}$-almost surely, and that the maximum condition
\begin{equation}\label{maxcond}
\Bar{H} \left( t, X_t^*, u_t^*, P_t^*, Q_t^* \right) = \max_{u \in U} \Bar{H} \left( t, X_t^*, u, P_t^*, Q_t^* \right), \quad \text{a.e.} \ t \in[0, T], \quad \mathbb{P} \text{-a.s.}
\end{equation}
holds. Then, $\left( X_t^*, u_t^* \right)$ is an optimal pair of problem \eqref{stochastic_control_problem}.
\end{theorem}
\begin{remark}
The proof of this theorem can be found in \cite[pp. 149-150]{pham2009continuous} or, in a more general setting, in \cite[pp. 138-140]{yong1999stochastic}. Theorem \ref{thm:SMP} provides a sufficient condition for optimality of the control $u^*_t$ under certain concavity and convexity assumptions, which are crucial in general; see, for instance, \cite[Example 3.1, pp. 138-140]{yong1999stochastic}. On the other hand, Theorem \ref{thm:SMP} itself does not constitute a necessary condition unless there is no diffusion control in problem \eqref{stochastic_control_problem}. In the rest of this paper, we shall use the superscript $*$ to indicate that a process is associated with the optimal control $u_t^*$.
\end{remark}
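For the LQ illustration, the hypotheses of Theorem \ref{thm:SMP} are easy to verify: $g(x) = \tfrac{1}{2} h x^2$ is convex since $h \geq 0$, and $(x, u) \mapsto \bar{H}(t, x, u, p, q)$ is concave because the terms involving $p$ and $q$ are linear in $(x, u)$ while $-\tfrac{1}{2}(\kappa x^2 + r u^2)$ is concave for $\kappa \geq 0$ and $r > 0$. Provided the control domain $U$ is chosen large enough that the unconstrained maximizer of $u \mapsto \bar{H}$ lies in its interior, the maximum condition \eqref{maxcond} is then characterized by the first-order condition discussed next.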
Under sufficient smoothness and the concavity assumptions on $\bar{H}$ in Theorem \ref{thm:SMP}, the maximization \eqref{maxcond} is uniquely solved by the following first-order condition,
\begin{equation}\label{feedback:first-order-condition}
\begin{aligned}
\nabla_u \bar{H}(t, X^*_t, u^*_t, P^*_t, Q^*_t)
&= (\nabla_u \bar{b}(t, X^*_t, u^*_t))^\top P^*_t \\
&\quad + \nabla_u \operatorname{Tr}(\bar{\sigma}^\top(t, X^*_t, u^*_t) Q^*_t) - \nabla_u\bar{f}(t, X^*_t, u^*_t) = 0.
\end{aligned}
\end{equation}
Similarly to Algorithm 3 in <|cite_start|> (Reference: Solving stochastic optimal control problem via stochastic maximum principle with deep learning method: In this paper, we aim to solve the high dimensional stochastic optimal control problem from the view of the stochastic maximum principle via deep learning. By introducing the extended Hamiltonian system which is essentially an FBSDE with a maximum condition, we reformulate the original control problem as a new one. Three algorithms are proposed to solve the new control problem. Numerical results for different examples demonstrate the effectiveness of our proposed algorithms, especially in high dimensional cases. And an important application of this method is to calculate the sub-linear expectations, which correspond to a kind of fully nonlinear PDEs.) <|cite_end|>, we assume that the first-order condition \eqref{feedback:first-order-condition} yields an explicit formula for the mapping $\mathcal{M}: (t, X_t^*, P_t^*, Q_t^*) \mapsto u_t^*$,
\begin{equation}\label{feedback}
u_t^* = \mathcal{M}(t, X^*_t, P^*_t, Q^*_t), \quad \forall t \in [0, T].
\end{equation}
We will call this mapping the \emph{feedback map}, and remark that for a rather wide range of interesting problems such an expression is available in closed form.
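In the LQ illustration, for instance, the first-order condition \eqref{feedback:first-order-condition} reads $\beta p + \gamma q - r u = 0$, so that the feedback map is given explicitly by
\begin{equation*}
\mathcal{M}(t, x, p, q) = \frac{\beta p + \gamma q}{r}.
\end{equation*}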
For each $\bar{\varphi} \in \{\bar{b}, \bar{\sigma}, \bar{f}\}$, let us denote by $\varphi$ the composition $\varphi(t, x, p, q) \coloneqq \bar{\varphi}(t, x, \mathcal{M}(t, x, p, q))$, so that, together with the unchanged terminal cost $g$, we obtain the set of maps $\varphi \in \{b, \sigma, f, g\}$ depending only on $(t, x, p, q)$. Similarly, we define the function $\bar{F}(t, x, u, p, q) \coloneqq \nabla_x \bar{H}(t, x, u, p, q)$ and write
\begin{equation}\label{def:Ham_pq}
F(t, x, p, q)
\coloneqq \bar{F}(t, x, \mathcal{M}(t, x, p, q), p, q).
\end{equation}
With this in hand, we reformulate \eqref{eq:adjoint} and the controlled SDE of \eqref{stochastic_control_problem} as a fully-coupled FBSDE, for $t\in[0, T]$,
\begin{equation}\label{eq:FBSDE}
\left\{
\begin{aligned}
X_t &= x_0 + \int_0^t b(s, X_s, P_s, Q_s) \,ds + \int_0^t \sigma(s, X_s, P_s, Q_s) \,dW_s, \\
P_t &= P_0 - \int_0^t F(s, X_s, P_s, Q_s) \,ds + \int_0^t Q_s\, d W_s, \\
P_T &= - \nabla_x g(X_T).
\end{aligned}
\right.
\end{equation}
We call the BSDE part in \eqref{eq:FBSDE}, subject to its boundary condition, the SMP-BSDE.
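To make the reformulation concrete, the following is a minimal illustrative sketch (our own, written in PyTorch; it is \emph{not} the algorithm formulated in Section \ref{sec3}) of how the coupled FBSDE \eqref{eq:FBSDE} could be treated numerically in the spirit of deep BSDE methods, for the scalar LQ example above: $P_0$ is a trainable parameter, $Q_{t_n}$ is parameterized by a small network at each time step, the control is recovered through the feedback map, and the loss penalizes the mismatch in the terminal condition $P_T = -\nabla_x g(X_T)$. All constants, network sizes and optimization settings are arbitrary choices made for illustration only.
\begin{verbatim}
import torch

torch.manual_seed(0)
a, beta, c, gamma, kappa, r, h = 0.5, 1.0, 0.2, 0.4, 1.0, 1.0, 1.0
x0, T, N, batch = 1.0, 1.0, 50, 512
dt = T / N

p0 = torch.nn.Parameter(torch.zeros(1))       # trainable initial value P_0
q_nets = torch.nn.ModuleList(                 # Q_{t_n} approximated by q_n(X_{t_n})
    [torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                         torch.nn.Linear(16, 1)) for _ in range(N)])
opt = torch.optim.Adam([p0] + list(q_nets.parameters()), lr=1e-2)

for step in range(2000):
    x = torch.full((batch, 1), x0)
    p = p0.expand(batch, 1)
    for n in range(N):
        q = q_nets[n](x)
        u = (beta * p + gamma * q) / r        # feedback map M(t, x, p, q)
        dw = torch.randn(batch, 1) * dt ** 0.5
        x_new = x + (a * x + beta * u) * dt + (c * x + gamma * u) * dw
        p = p - (a * p + c * q - kappa * x) * dt + q * dw   # adjoint dynamics
        x = x_new
    loss = ((p + h * x) ** 2).mean()          # enforce P_T = -g'(X_T) = -h X_T
    opt.zero_grad()
    loss.backward()
    opt.step()
\end{verbatim}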
\begin{remark}\label{remark:4} An important feature of \eqref{eq:FBSDE} is that its solution is equivalent to the solution of \eqref{stochastic_control_problem}. Indeed, if $(X^*_t, u^*_t)$ is an optimal pair of \eqref{stochastic_control_problem}, then, by Remark \ref{wellpose_adjoint}, there exists a unique pair $(P^*_t, Q^*_t)$ solving the adjoint equation, and the optimal triple $(X^*_t, P^*_t, Q^*_t)$ also solves \eqref{eq:FBSDE}. Conversely, solving \eqref{eq:FBSDE} gives $(\tilde{X}_t, \tilde{P}_t, \tilde{Q}_t)$, from which we obtain $\tilde{u}_t$ through the feedback map \eqref{feedback}. Consequently, we must have $\tilde{u}_t = u^*_t$ and $(\tilde{X}_t, \tilde{P}_t, \tilde{Q}_t) = (X^*_t, P^*_t, Q^*_t)$, due to the uniqueness of the feedback map \eqref{feedback} and Theorem \ref{thm:SMP}.
\end{remark} <|paper_end|> | [
"<|reference_start|> Piecewise Affine Neural Networks and Nonlinear Control: <|reference_end|>",
"<|reference_start|> Deep neural networks algorithms for stochastic control problems on finite horizon: Convergence analysis: This paper develops algorithms for high-dimensional stochastic control problems based on deep learning and dynamic programming. Unlike classical approximate dynamic programming approaches, we first approximate the optimal policy by means of neural networks in the spirit of deep reinforcement learning, and then the value function by Monte Carlo regression. This is achieved in the dynamic programming recursion by performance or hybrid iteration, and regress now methods from numerical probabilities. We provide a theoretical justification of these algorithms. Consistency and rate of convergence for the control and value function estimates are analyzed and expressed in terms of the universal approximation error of the neural networks, and of the statistical error when estimating network function, leaving aside the optimization error. Numerical results on various applications are presented in a companion paper (arxiv.org/abs/1812.05916) and illustrate the performance of the proposed algorithms. <|reference_end|>",
"<|reference_start|> Convergence of a robust deep FBSDE method for stochastic control: In this paper, we propose a deep learning based numerical scheme for strongly coupled FBSDEs, stemming from stochastic control. It is a modification of the deep BSDE method in which the initial value to the backward equation is not a free parameter, and with a new loss function being the weighted sum of the cost of the control problem, and a variance term which coincides with the mean squared error in the terminal condition. We show by a numerical example that a direct extension of the classical deep BSDE method to FBSDEs, fails for a simple linear-quadratic control problem, and motivate why the new method works. Under regularity and boundedness assumptions on the exact controls of time continuous and time discrete control problems, we provide an error analysis for our method. We show empirically that the method converges for three different problems, one being the one that failed for a direct extension of the deep BSDE method. <|reference_end|>",
"<|reference_start|> Solving stochastic optimal control problem via stochastic maximum principle with deep learning method: In this paper, we aim to solve the high dimensional stochastic optimal control problem from the view of the stochastic maximum principle via deep learning. By introducing the extended Hamiltonian system which is essentially an FBSDE with a maximum condition, we reformulate the original control problem as a new one. Three algorithms are proposed to solve the new control problem. Numerical results for different examples demonstrate the effectiveness of our proposed algorithms, especially in high dimensional cases. And an important application of this method is to calculate the sub-linear expectations, which correspond to a kind of fully nonlinear PDEs. <|reference_end|>"
] | [
9,
13,
30,
32
] | {"<|cite_1|>": "ss-2310730", "<|cite_2|>": "ss-714670", "<|cite_3|>": "ss-1513962", "<|cite_4|>": "ss-1506589", "<|cite_5|>": "ss-1535027", "<|cite_6|>": "ss-1427186", "<|cite_7|>": "ss-1523453", "<|cite_8|>": "arxiv-110773", "<|cite_9|>": "ss-963568", "<|cite_10|>": "ss-2310731", "<|cite_11|>": "ss-2310732", "<|cite_12|>": "arxiv-110773", "<|cite_13|>": "ss-735680", "<|cite_14|>": "ss-1352240", "<|cite_15|>": "ss-811941", "<|cite_16|>": "arxiv-128820", "<|cite_17|>": "ss-1599871", "<|cite_18|>": "arxiv-276399", "<|cite_19|>": "ss-737596", "<|cite_20|>": "ss-1685191", "<|cite_21|>": "arxiv-489867", "<|cite_22|>": "ss-1246852", "<|cite_23|>": "ss-2310734", "<|cite_24|>": "ss-1599871", "<|cite_25|>": "ss-2310734", "<|cite_26|>": "arxiv-190309", "<|cite_27|>": "ss-2310735", "<|cite_28|>": "ss-2310736", "<|cite_29|>": "arxiv-128820", "<|cite_30|>": "ss-1599871", "<|cite_31|>": "ss-2310734", "<|cite_32|>": "ss-1599871", "<|cite_33|>": "arxiv-276399"} |
2303.10316 | <|paper_start|> Title: Zero-shot Sound Event Classification Using a Sound Attribute Vector with Global and Local Feature Learning
Abstract: Zero-shot Sound Event Classification Using a Sound Attribute Vector with Global and Local Feature Learning: This paper introduces a zero-shot sound event classification (ZS-SEC) method to identify sound events that have never occurred in training data. In our previous work, we proposed a ZS-SEC method using sound attribute vectors (SAVs), where a deep neural network model infers attribute information that describes the sound of an event class instead of inferring its class label directly. Our previous method showed that it could classify unseen events to some extent; however, the accuracy for unseen events was far inferior to that for seen events. In this paper, we propose a new ZS-SEC method that can learn discriminative global features and local features simultaneously to enhance SAV-based ZS-SEC. In the proposed method, while the global features are learned in order to discriminate the event classes in the training data, the spectro-temporal local features are learned in order to regress the attribute information using attribute prototypes. The experimental results show that our proposed method can improve the accuracy of SAV-based ZS-SEC and can visualize the region in the spectrogram related to each attribute.
Introduction
\label{sec:intro}
Sound event classification (SEC) is a task in which we classify active sound events in a recording, such as the sound of running water, footsteps, or a moving car.
It is expected to be applied in uses related to the care of the elderly and babies <|cite_start|> (Reference: Robust sound recognition applied to awareness for health/children/elderly care: This paper presented a robust sound recognition work applied to awareness for health/children/elderly care. Specific sound awareness services can be activated based on recognized sound classes for detecting human activities as health care. To attain this goal, this study developed key technologies as follows: 1) SNR-aware subspace signal enhancement, 2) pitch and power density-based sound/speech discrimination, 3) HMM-based speech recognition, 4) sound recognition with ICA-transformed MFCCs feature and frame-based multiclass SVMs. Each classified sound event is response to human with predefined processes as sound awareness info. Simulations and an experiment are given to illustrate the performance of the proposed robust sound recognition system in a real-world home environment, Aspire Home, NCKU. The overall average resulting accuracy rate was approximately 90.97%.) <|cite_end|> <|cite_start|> (Reference: Healthcare audio event classification using Hidden Markov Models and Hierarchical Hidden Markov Models: Audio is a useful modality complement to video for healthcare monitoring. In this paper, we investigate the use of Hierarchical Hidden Markov Models (HHMMs) for healthcare audio event classification. We show that HHMM can handle audio events with recursive patterns to improve the classification performance. We also propose a model fusion method to cover large variations often existing in healthcare audio events. Experimental results from classifying key eldercare audio events show the effectiveness of the model fusion method for healthcare audio event classification.) <|cite_end|>, machine anomaly detection <|cite_start|> (Reference: Cognitive acoustic analytics service for Internet of Things: The rapid development of the Internet of Things (IoT) has brought great changes for non-contact and non-destructive sensing and diagnosis. For every inanimate object can tell us something by the sound it makes, acoustic sensor demonstrates great advantages comparing to conventional electronic and mechanic sensors in such cases: overcoming environmental obstacles, mapping to existing use cases of detecting problems with human ears, low cost for deployment, etc. It could be widely applied to various domains, such as predictive maintenance of machinery, robot sensory, elderly and baby care in smart home, etc. Whether we can use the acoustic sensor data to understand what is happening and to predict what will happen relies heavily on the analytics capabilities we apply to the acoustic data, which has to overcome the obstacles of noise, disturbance and errors, and has to meet the requirement of real-time processing of high volume signals with large number of sensors. In this paper, we propose a scalable cognitive acoustics analytics service for IoT that provides the user an incremental learning approach to evolve their analytics capability on non-intuitive and unstructured acoustic data through the combination of acoustic signal processing and machine learning technology. It first performs acoustic signal processing and denoising, enables acoustic signal based abnormal detection based on sound intensity, spectral centroid, etc. 
Then based on the accumulated abnormal data, a supervised learning method is performed as baseline and a neural network based classifier is used to recognize acoustic events in different scenarios with various volume of sample data and requirement of accuracy. In addition, acoustic sensor arrays processing is supported for localization of moving acoustic source in more complex scenario. In this paper, we designed a hybrid computing structure. Finally, we conduct experiments on acoustic event recognition for machinery diagnosis, and show that the proposed system can achieve high accuracy.) <|cite_end|> <|cite_start|> (Reference: Unsupervised Detection of Anomalous Sound Based on Deep Learning and the Neyman–Pearson Lemma: This paper proposes a novel optimization principle and its implementation for unsupervised anomaly detection in sound (ADS) using an autoencoder (AE). The goal of the unsupervised-ADS is to detect unknown anomalous sounds without training data of anomalous sounds. The use of an AE as a normal model is a state-of-the-art technique for the unsupervised-ADS. To decrease the false positive rate (FPR), the AE is trained to minimize the reconstruction error of normal sounds, and the anomaly score is calculated as the reconstruction error of the observed sound. Unfortunately, since this training procedure does not take into account the anomaly score for anomalous sounds, the true positive rate (TPR) does not necessarily increase. In this study, we define an objective function based on the Neyman–Pearson lemma by considering the ADS as a statistical hypothesis test. The proposed objective function trains the AE to maximize the TPR under an arbitrary low FPR condition. To calculate the TPR in the objective function, we consider that the set of anomalous sounds is the complementary set of normal sounds and simulate anomalous sounds by using a rejection sampling algorithm. Through experiments using synthetic data, we found that the proposed method improved the performance measures of the ADS under low FPR conditions. In addition, we confirmed that the proposed method could detect anomalous sounds in real environments.) <|cite_end|> <|cite_start|> (Reference: Acoustic Anomaly Detection for Machine Sounds based on Image Transfer Learning: In industrial applications, the early detection of malfunctioning factory machinery is crucial. In this paper, we consider acoustic malfunction detection via transfer learning. Contrary to the majority of current approaches which are based on deep autoencoders, we propose to extract features using neural networks that were pretrained on the task of image classification. We then use these features to train a variety of anomaly detection models and show that this improves results compared to convolutional autoencoders in recordings of four different factory machines in noisy environments. Moreover, we find that features extracted from ResNet based networks yield better results than those from AlexNet and Squeezenet. In our setting, Gaussian Mixture Models and One-Class Support Vector Machines achieve the best anomaly detection performance.) 
<|cite_end|> <|cite_start|> (Reference: Description and Discussion on DCASE 2022 Challenge Task 2: Unsupervised Anomalous Sound Detection for Machine Condition Monitoring Applying Domain Generalization Techniques: We present the task description and discussion on the results of the DCASE 2022 Challenge Task 2: “Unsupervised anomalous sound detection (ASD) for machine condition monitoring applying domain generalization techniques”. Domain shifts are a critical problem for the application of ASD systems. Because domain shifts can change the acoustic characteristics of data, a model trained in a source domain performs poorly for a target domain. In DCASE 2021 Challenge Task 2, we organized an ASD task for handling domain shifts. In this task, it was assumed that the occurrences of domain shifts are known. However, in practice, the domain of each sample may not be given, and the domain shifts can occur implicitly. In 2022 Task 2, we focus on domain generalization techniques that detects anomalies regardless of the domain shifts. Specifically, the domain of each sample is not given in the test data and only one threshold is allowed for all domains. Analysis of 81 submissions from 31 teams revealed two remarkable types of domain generalization techniques: 1) domain-mixing-based approach that obtains generalized representations and 2) domain-classification-based approach that explicitly or implicitly classifies different domains to improve detection performance for each domain.) <|cite_end|>, and so on.
Recently, with the great strides made in the development of deep learning technology, it is becoming possible to analyze various sounds such as environmental sounds <|cite_start|> (Reference: Deep neural networks for automatic detection of screams and shouted speech in subway trains: Deep Neural Networks (DNNs) have recently become a popular technique for regression and classification problems. Their capacity to learn high-order correlations between input and output data proves to be very powerful for automatic speech recognition. In this paper we investigate the use of DNNs for automatic scream and shouted speech detection, within the framework of surveillance systems in public transportation. We recorded a database of sounds occurring in subway trains in real conditions of exploitation and used DNNs to classify the sounds into screams, shouts and other categories. We report encouraging results, given the difficulty of the task, especially when a high level of surrounding noise is present.) <|cite_end|> <|cite_start|> (Reference: Sound Event Detection Utilizing Graph Laplacian Regularization with Event Co-occurrence: A limited number of types of sound event occur in an acoustic scene and some sound events tend to co-occur in the scene; for example, the sound events "dishes" and "glass jingling" are likely to co-occur in the acoustic scene "cooking". In this paper, we propose a method of sound event detection using graph Laplacian regularization with sound event co-occurrence taken into account. In the proposed method, the occurrences of sound events are expressed as a graph whose nodes indicate the frequencies of event occurrence and whose edges indicate the sound event co-occurrences. This graph representation is then utilized for the model training of sound event detection, which is optimized under an objective function with a regularization term considering the graph structure of sound event occurrence and co-occurrence. Evaluation experiments using the TUT Sound Events 2016 and 2017 detasets, and the TUT Acoustic Scenes 2016 dataset show that the proposed method improves the performance of sound event detection by 7.9 percentage points compared with the conventional CNN-BiGRU-based detection method in terms of the segment-based F1 score. In particular, the experimental results indicate that the proposed method enables the detection of co-occurring sound events more accurately than the conventional method.) <|cite_end|>.
However, because a large amount of labeled data is required to train a deep learning model,
the scarcity of training data for some events is a serious problem in SEC tasks.
For example, in anomalous event detection it is difficult to collect anomaly data because the target anomalous event rarely occurs.
To overcome the problem of data scarcity, few-shot SEC methods have been proposed to classify sound events with only a few samples <|cite_start|> (Reference: Learning to match transient sound events using attentional similarity for few-shot sound recognition: In this paper, we introduce a novel attentional similarity module for the problem of few-shot sound recognition. Given a few examples of an unseen sound event, a classifier must be quickly adapted to recognize the new sound event without much fine-tuning. The proposed attentional similarity module can be plugged into any metric-based learning method for few-shot learning, allowing the resulting model to especially match related short sound events. Extensive experiments on two datasets shows that the proposed module consistently improves the performance of five different metric-based learning methods for few-shot sound recognition. The relative improvement ranges from +4.1% to +7.7% for 5-shot 5-way accuracy for the ESC-50 dataset, and from +2.1% to +6.5% for noiseESC-50. Qualitative results demonstrate that our method contributes in particular to the recognition of transient sound events.) <|cite_end|> <|cite_start|> (Reference: Few-shot sound event detection: Locating perceptually similar sound events within a continuous recording is a common task for various audio applications. However, current tools require users to manually listen to and label all the locations of the sound events of interest, which is tedious and time-consuming. In this work, we (1) adapt state-of-the-art metric-based few-shot learning methods to automate the detection of similar-sounding events, requiring only one or few examples of the target event, (2) develop a method to automatically construct a partial set of labeled examples (negative samples) to reduce user labeling effort, and (3) develop an inference-time data augmentation method to increase detection accuracy. To validate our approach, we perform extensive comparative analysis of few-shot learning methods for the task of keyword detection in speech. We show that our approach successfully adapts closed-set few-shot learning approaches to an open-set sound event detection problem.) <|cite_end|> <|cite_start|> (Reference: A Mutual learning framework for Few-shot Sound Event Detection: Although prototypical network (ProtoNet) has proved to be an effective method for few-shot sound event detection, two problems still exist. Firstly, the small-scaled support set is insufficient so that the class prototypes may not represent the class center accurately. Secondly, the feature extractor is task-agnostic (or class-agnostic): the feature extractor is trained with base-class data and directly applied to unseen-class data. To address these issues, we present a novel mutual learning framework with transductive learning, which aims at iteratively updating the class prototypes and feature extractor. More specifically, we propose to update class prototypes with transductive inference to make the class prototypes as close to the true class center as possible. To make the feature extractor to be task-specific, we propose to use the updated class prototypes to fine-tune the feature extractor. After that, a fine-tuned feature extractor further helps produce better class prototypes. 
Our method achieves the F-score of 38.4$\%$ on the DCASE 2021 Task 5 evaluation set, which won the first place in the few-shot bioacoustic event detection task of Detection and Classification of Acoustic Scenes and Events (DCASE) 2021 Challenge.) <|cite_end|>.
Chou \textit{et al.} propose an attentional similarity module to match transient sound events for few-shot SEC <|cite_start|> (Reference: Learning to match transient sound events using attentional similarity for few-shot sound recognition: In this paper, we introduce a novel attentional similarity module for the problem of few-shot sound recognition. Given a few examples of an unseen sound event, a classifier must be quickly adapted to recognize the new sound event without much fine-tuning. The proposed attentional similarity module can be plugged into any metric-based learning method for few-shot learning, allowing the resulting model to especially match related short sound events. Extensive experiments on two datasets shows that the proposed module consistently improves the performance of five different metric-based learning methods for few-shot sound recognition. The relative improvement ranges from +4.1% to +7.7% for 5-shot 5-way accuracy for the ESC-50 dataset, and from +2.1% to +6.5% for noiseESC-50. Qualitative results demonstrate that our method contributes in particular to the recognition of transient sound events.) <|cite_end|>.
In <|cite_start|> (Reference: Few-shot sound event detection: Locating perceptually similar sound events within a continuous recording is a common task for various audio applications. However, current tools require users to manually listen to and label all the locations of the sound events of interest, which is tedious and time-consuming. In this work, we (1) adapt state-of-the-art metric-based few-shot learning methods to automate the detection of similar-sounding events, requiring only one or few examples of the target event, (2) develop a method to automatically construct a partial set of labeled examples (negative samples) to reduce user labeling effort, and (3) develop an inference-time data augmentation method to increase detection accuracy. To validate our approach, we perform extensive comparative analysis of few-shot learning methods for the task of keyword detection in speech. We show that our approach successfully adapts closed-set few-shot learning approaches to an open-set sound event detection problem.) <|cite_end|>, prototypical networks were shown to be an effective approach for few-shot SEC.
Yang \textit{et al.} propose a mutual learning framework that iteratively improves the class prototypes and the feature extractor <|cite_start|> (Reference: A Mutual learning framework for Few-shot Sound Event Detection: Although prototypical network (ProtoNet) has proved to be an effective method for few-shot sound event detection, two problems still exist. Firstly, the small-scaled support set is insufficient so that the class prototypes may not represent the class center accurately. Secondly, the feature extractor is task-agnostic (or class-agnostic): the feature extractor is trained with base-class data and directly applied to unseen-class data. To address these issues, we present a novel mutual learning framework with transductive learning, which aims at iteratively updating the class prototypes and feature extractor. More specifically, we propose to update class prototypes with transductive inference to make the class prototypes as close to the true class center as possible. To make the feature extractor to be task-specific, we propose to use the updated class prototypes to fine-tune the feature extractor. After that, a fine-tuned feature extractor further helps produce better class prototypes. Our method achieves the F-score of 38.4$\%$ on the DCASE 2021 Task 5 evaluation set, which won the first place in the few-shot bioacoustic event detection task of Detection and Classification of Acoustic Scenes and Events (DCASE) 2021 Challenge.) <|cite_end|>.
Although these few-shot SEC methods can achieve acceptable performance with only a few examples, they are limited to classifying a predefined set of classes.
When encountering arbitrary sounds that are unseen during training, the above-mentioned few-shot SEC methods therefore have only limited classification capabilities.
Zero-shot learning (ZSL) extends the idea of few-shot classification by assuming that the labels we wish to predict at testing do not have available training data <|cite_start|> (Reference: {Few-Shot and Zero-Shot Multi-Label Learning for Structured Label Spaces: Large multi-label datasets contain labels that occur thousands of times (frequent group), those that occur only a few times (few-shot group), and labels that never appear in the training dataset (zero-shot group). Multi-label few- and zero-shot label prediction is mostly unexplored on datasets with large label spaces, especially for text classification. In this paper, we perform a fine-grained evaluation to understand how state-of-the-art methods perform on infrequent labels. Furthermore, we develop few- and zero-shot methods for multi-label text classification when there is a known structure over the label space, and evaluate them on two publicly available medical text datasets: MIMIC II and MIMIC III. For few-shot labels we achieve improvements of 6.2% and 4.8% in R@10 for MIMIC II and MIMIC III, respectively, over prior efforts; the corresponding R@10 improvements for zero-shot labels are 17.3% and 19%.) <|cite_end|>.
ZSL has been studied widely in the field of computer vision (CV), and various methods have been proposed <|cite_start|> (Reference: Latent Embeddings for Zero-shot Classification: We present a novel latent embedding model for learning a compatibility function between image and class embeddings, in the context of zero-shot classification. The proposed method augments the state-of-the-art bilinear compatibility model by incorporating latent variables. Instead of learning a single bilinear map, it learns a collection of maps with the selection, of which map to use, being a latent variable for the current image-class pair. We train the model with a ranking based objective function which penalizes incorrect rankings of the true class for a given image. We empirically demonstrate that our model improves the state-of-the-art for various class embeddings consistently on three challenging publicly available datasets for the zero-shot setting. Moreover, our method leads to visually highly interpretable results with clear clusters of different fine-grained object properties that correspond to different latent variable maps.) <|cite_end|> <|cite_start|> (Reference: Fine-Grained Generalized Zero-Shot Learning via Dense Attribute-Based Attention: We address the problem of fine-grained generalized zero-shot recognition of visually similar classes without training images for some classes. We propose a dense attribute-based attention mechanism that for each attribute focuses on the most relevant image regions, obtaining attribute-based features. Instead of aligning a global feature vector of an image with its associated class semantic vector, we propose an attribute embedding technique that aligns each attribute-based feature with its attribute semantic vector. Hence, we compute a vector of attribute scores, for the presence of each attribute in an image, whose similarity with the true class semantic vector is maximized. Moreover, we adjust each attribute score using an attention mechanism over attributes to better capture the discriminative power of different attributes. To tackle the challenge of bias towards seen classes during testing, we propose a new self-calibration loss that adjusts the probability of unseen classes to account for the training bias. We conduct experiments on three popular datasets of CUB, SUN and AWA2 as well as the large-scale DeepFashion dataset, showing that our model significantly improves the state of the art.) <|cite_end|> <|cite_start|> (Reference: Gradient Matching Generative Networks for Zero-Shot
Learning: Zero-shot learning (ZSL) is one of the most promising problems where substantial progress can potentially be achieved through unsupervised learning, due to distributional differences between supervised and zero-shot classes. For this reason, several works investigate the incorporation of discriminative domain adaptation techniques into ZSL, which, however, lead to modest improvements in ZSL accuracy. In contrast, we propose a generative model that can naturally learn from unsupervised examples, and synthesize training examples for unseen classes purely based on their class embeddings, and therefore, reduce the zero-shot learning problem into a supervised classification task. The proposed approach consists of two important components: (i) a conditional Generative Adversarial Network that learns to produce samples that mimic the characteristics of unsupervised data examples, and (ii) the Gradient Matching (GM) loss that measures the quality of the gradient signal obtained from the synthesized examples. Using our GM loss formulation, we enforce the generator to produce examples from which accurate classifiers can be trained. Experimental results on several ZSL benchmark datasets show that our approach leads to significant improvements over the state of the art in generalized zero-shot classification.) <|cite_end|> <|cite_start|> (Reference: Contrastive Embedding for Generalized Zero-Shot Learning: Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes, when only the labeled examples from seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL classification since it lacks discriminative information. To tackle this issue, we propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework. The hybrid GZSL approach maps both the real and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a contrastive embedding (CE) for our hybrid GZSL framework. The proposed contrastive embedding can leverage not only the class-wise supervision but also the instance-wise supervision, where the latter is usually neglected by existing GZSL researches. We evaluate our proposed hybrid GZSL framework with contrastive embedding, named CE-GZSL, on five benchmark datasets. The results show that our CEGZSL method can outperform the state-of-the-arts by a significant margin on three datasets. Our codes are available on https://github.com/Hanzy1996/CE-GZSL.) <|cite_end|> since the work carried out by Lampert \textit{et al.}.
One of the representative approaches of ZSL in CV uses visual attributes <|cite_start|> (Reference: {Attribute-based classification for zero-shot visual object categorization: —We study the problem of object recognition for categories for which we have no training examples, a task also called zero-data or zero-shot learning. This situation has hardly been studied in computer vision research, even though it occurs frequently: the world contains tens of thousands of different object classes and for only few of them image collections have been formed and suitably annotated. To tackle the problem we introduce attribute-based classification: objects are identified based on a high-level description that is phrased in terms of semantic attributes, such as the object's color or shape. Because the identification of each such property transcends the specific learning task at hand, the attribute classifiers can be pre-learned independently, e.g. from existing image datasets unrelated to the current task. Afterwards, new classes can be detected based on their attribute representation, without the need for a new training phase. In this paper we also introduce a new dataset, Animals with Attributes, of over 30,000 images of 50 animal classes, annotated with 85 semantic attributes. Extensive experiments on this and two more datasets show that attribute-based classification indeed is able to categorize images without access to any training images of the target classes.) <|cite_end|> <|cite_start|> (Reference: Zero-Shot Learning via Visual Abstraction: ) <|cite_end|>, which describe the appearance of the class (e.g., horse shape, black and white, stripe, etc. for class ``zebra'') to classify images in the visual attribute space.
In this way, even if a class has no training image data, this approach can identify the class through its attribute information.
In contrast to the many studies on ZSL in CV, little research has been carried out on ZSL for SEC.
Previous works <|cite_start|> (Reference: Zero-Shot Audio Classification via Semantic Embeddings: In this paper, we study zero-shot learning in audio classification via semantic embeddings extracted from textual labels and sentence descriptions of sound classes. Our goal is to obtain a classifier that is capable of recognizing audio instances of sound classes that have no available training samples, but only semantic side information. We employ a bilinear compatibility framework to learn an acoustic-semantic projection between intermediate-level representations of audio instances and sound classes, i.e., acoustic embeddings and semantic embeddings. We use VGGish to extract deep acoustic embeddings from audio clips, and pre-trained language models (Word2Vec, GloVe, BERT) to generate either label embeddings from textual labels or sentence embeddings from sentence descriptions of sound classes. Audio classification is performed by a linear compatibility function that measures how compatible an acoustic embedding and a semantic embedding are. We evaluate the proposed method on a small balanced dataset ESC-50 and a large-scale unbalanced audio subset of AudioSet. The experimental results show that classification performance is significantly improved by involving sound classes that are semantically close to the test classes in training. Meanwhile, we demonstrate that both label embeddings and sentence embeddings are useful for zero-shot learning. Classification performance is improved by concatenating label/sentence embeddings generated with different language models. With their hybrid concatenations, the results are improved further.) <|cite_end|> <|cite_start|> (Reference: Zero-shot audio classification with factored linear and nonlinear acoustic-semantic projections: In this paper, we study zero-shot learning in audio classification through factored linear and nonlinear acoustic-semantic projections between audio instances and sound classes. Zero-shot learning in audio classification refers to classification problems that aim at recognizing audio instances of sound classes, which have no available training data but only semantic side information. In this paper, we address zero-shot learning by employing factored linear and nonlinear acoustic-semantic projections. We develop factored linear projections by applying rank decomposition to a bilinear model, and use nonlinear activation functions, such as tanh, to model the non-linearity between acoustic embeddings and semantic embeddings. Compared with the prior bilinear model, experimental results show that the proposed projection methods are effective for improving classification performance of zero-shot learning in audio classification.) <|cite_end|> propose methods of zero-shot SEC (ZS-SEC) using semantic embeddings.
In those methods, a deep neural network (DNN) infers the semantic embedding representation from an input sound to classify the sound events in semantic space, and a word embedding generated by Word2Vec <|cite_start|> (Reference: Distributed Representations of Words and Phrases and their Compositionality: The recently introduced continuous Skip-gram model is an efficient method for learning high-quality distributed vector representations that capture a large number of precise syntactic and semantic word relationships. In this paper we present several extensions that improve both the quality of the vectors and the training speed. By subsampling of the frequent words we obtain significant speedup and also learn more regular word representations. We also describe a simple alternative to the hierarchical softmax called negative sampling. An inherent limitation of word representations is their indifference to word order and their inability to represent idiomatic phrases. For example, the meanings of "Canada" and "Air" cannot be easily combined to obtain "Air Canada". Motivated by this example, we present a simple method for finding phrases in text, and show that learning good vector representations for millions of phrases is possible.) <|cite_end|> from a class (event) label is used as a semantic embedding.
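To make this semantic-embedding pipeline concrete, the following sketch classifies an audio clip among unseen classes by projecting its acoustic embedding into the semantic space and picking the label whose embedding is most similar. The projection matrix stands in for a trained DNN, and the dimensions, label names, and random embeddings are assumptions for illustration only, not the configuration used in the cited works.
\begin{verbatim}
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def zero_shot_classify(semantic_vec, label_embeddings):
    """Assign the label whose semantic embedding is most similar to the
    projected audio embedding (nearest neighbour in semantic space)."""
    scores = {label: cosine(semantic_vec, emb)
              for label, emb in label_embeddings.items()}
    return max(scores, key=scores.get), scores

# --- toy example with assumed dimensions --------------------------------
rng = np.random.default_rng(0)
semantic_dim = 300                 # e.g. Word2Vec dimensionality
acoustic_dim = 128                 # e.g. a VGGish-style audio embedding

# stand-in for a trained DNN mapping acoustic features to semantic space
projection = rng.normal(size=(semantic_dim, acoustic_dim))

# semantic embeddings of *unseen* classes (would come from Word2Vec)
unseen_labels = {"dog_bark": rng.normal(size=semantic_dim),
                 "siren": rng.normal(size=semantic_dim),
                 "rain": rng.normal(size=semantic_dim)}

audio_clip = rng.normal(size=acoustic_dim)       # stand-in acoustic embedding
predicted, _ = zero_shot_classify(projection @ audio_clip, unseen_labels)
print(predicted)
\end{verbatim}
In practice the label embeddings would come from a pre-trained language model and the projection from a compatibility function learned on seen classes.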
However, representing class information by word embeddings is inadequate for SEC: although a word embedding reflects the semantic information of the label word, it also contains much information that is irrelevant to the sound of the class.
In our previous work <|cite_start|> (Reference: Binary Attribute Embeddings for Zero-Shot Sound Event Classification: In this paper, we introduce a zero-shot learning method for sound event classification. The proposed method uses a semantic embedding of each sound event class and measures the compatibility between the semantic embedding and the input audio feature embedding. For semantic embedding, we newly define attribute vector that explains several attribute information of a sound event class, such as pitch, length, material of the sound source, etc. In the experiments, the proposed method showed higher accuracy than a conventional method using word embedding as the semantic embedding.) <|cite_end|>, therefore, we proposed a sound attribute vector (SAV) that can directly describe the sound of a class, just as the visual attributes mentioned above describe the appearance of a class.
The use of the SAV showed higher ZS-SEC accuracy than the use of word embeddings; however, the accuracy of classifying unseen events was still far behind that for classifying seen events.
In this paper, we propose a new ZS-SEC method that can learn discriminative global features and local features simultaneously to enhance the SAV-based ZS-SEC.
In the proposed method, the SAV is inferred from an encoded input sound using two modules: a base module and a prototype module.
The base module learns the discriminative global features of the input spectrogram to discriminate the attributes of each event class,
and the prototype module learns the spectro-temporal local features to regress the attributes of the target class.
In this way, the proposed method is expected to enhance both the ability to discriminate event classes and the ability to infer the attribute vector.
In addition, our proposed method can visualize the region in the spectrogram related to each attribute by calculating the similarity scores of the local features.
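As an illustration of how such attribute-wise similarity scores can be computed and visualized, the sketch below matches one learned prototype per attribute against the local (per time-frequency patch) features of a spectrogram, yielding one similarity map per attribute that can be overlaid on the input; pooling each map gives one entry of the predicted attribute vector. The feature and prototype shapes are assumed for illustration and do not reproduce the exact architecture of the proposed model.
\begin{verbatim}
import numpy as np

def attribute_similarity_maps(local_features, prototypes):
    """local_features: (T, F, D) local feature vectors from the encoder.
    prototypes:        (num_attributes, D) one learned prototype per attribute.
    Returns (num_attributes, T, F) cosine-similarity maps."""
    feats = local_features / (np.linalg.norm(local_features, axis=-1,
                                             keepdims=True) + 1e-12)
    protos = prototypes / (np.linalg.norm(prototypes, axis=-1,
                                          keepdims=True) + 1e-12)
    return np.einsum("tfd,ad->atf", feats, protos)

def attribute_scores(sim_maps):
    """Pool each map into a single score per attribute (max pooling here),
    giving one entry of the predicted sound attribute vector."""
    return sim_maps.reshape(sim_maps.shape[0], -1).max(axis=1)

rng = np.random.default_rng(0)
local_feats = rng.normal(size=(64, 16, 32))  # assumed (time, freq, dim) grid
protos = rng.normal(size=(10, 32))           # assumed 10 attributes
maps = attribute_similarity_maps(local_feats, protos)
print(maps.shape, attribute_scores(maps).shape)   # (10, 64, 16) (10,)
\end{verbatim}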
We confirm the effectiveness of the proposed method by comparing it to our previous SAV-based ZS-SEC method. <|paper_end|> | [
"<|reference_start|> Learning to match transient sound events using attentional similarity for few-shot sound recognition: In this paper, we introduce a novel attentional similarity module for the problem of few-shot sound recognition. Given a few examples of an unseen sound event, a classifier must be quickly adapted to recognize the new sound event without much fine-tuning. The proposed attentional similarity module can be plugged into any metric-based learning method for few-shot learning, allowing the resulting model to especially match related short sound events. Extensive experiments on two datasets shows that the proposed module consistently improves the performance of five different metric-based learning methods for few-shot sound recognition. The relative improvement ranges from +4.1% to +7.7% for 5-shot 5-way accuracy for the ESC-50 dataset, and from +2.1% to +6.5% for noiseESC-50. Qualitative results demonstrate that our method contributes in particular to the recognition of transient sound events. <|reference_end|>",
"<|reference_start|> {Few-Shot and Zero-Shot Multi-Label Learning for Structured Label Spaces: Large multi-label datasets contain labels that occur thousands of times (frequent group), those that occur only a few times (few-shot group), and labels that never appear in the training dataset (zero-shot group). Multi-label few- and zero-shot label prediction is mostly unexplored on datasets with large label spaces, especially for text classification. In this paper, we perform a fine-grained evaluation to understand how state-of-the-art methods perform on infrequent labels. Furthermore, we develop few- and zero-shot methods for multi-label text classification when there is a known structure over the label space, and evaluate them on two publicly available medical text datasets: MIMIC II and MIMIC III. For few-shot labels we achieve improvements of 6.2% and 4.8% in R@10 for MIMIC II and MIMIC III, respectively, over prior efforts; the corresponding R@10 improvements for zero-shot labels are 17.3% and 19%. <|reference_end|>",
"<|reference_start|> Fine-Grained Generalized Zero-Shot Learning via Dense Attribute-Based Attention: We address the problem of fine-grained generalized zero-shot recognition of visually similar classes without training images for some classes. We propose a dense attribute-based attention mechanism that for each attribute focuses on the most relevant image regions, obtaining attribute-based features. Instead of aligning a global feature vector of an image with its associated class semantic vector, we propose an attribute embedding technique that aligns each attribute-based feature with its attribute semantic vector. Hence, we compute a vector of attribute scores, for the presence of each attribute in an image, whose similarity with the true class semantic vector is maximized. Moreover, we adjust each attribute score using an attention mechanism over attributes to better capture the discriminative power of different attributes. To tackle the challenge of bias towards seen classes during testing, we propose a new self-calibration loss that adjusts the probability of unseen classes to account for the training bias. We conduct experiments on three popular datasets of CUB, SUN and AWA2 as well as the large-scale DeepFashion dataset, showing that our model significantly improves the state of the art. <|reference_end|>",
"<|reference_start|> Contrastive Embedding for Generalized Zero-Shot Learning: Generalized zero-shot learning (GZSL) aims to recognize objects from both seen and unseen classes, when only the labeled examples from seen classes are provided. Recent feature generation methods learn a generative model that can synthesize the missing visual features of unseen classes to mitigate the data-imbalance problem in GZSL. However, the original visual feature space is suboptimal for GZSL classification since it lacks discriminative information. To tackle this issue, we propose to integrate the generation model with the embedding model, yielding a hybrid GZSL framework. The hybrid GZSL approach maps both the real and the synthetic samples produced by the generation model into an embedding space, where we perform the final GZSL classification. Specifically, we propose a contrastive embedding (CE) for our hybrid GZSL framework. The proposed contrastive embedding can leverage not only the class-wise supervision but also the instance-wise supervision, where the latter is usually neglected by existing GZSL researches. We evaluate our proposed hybrid GZSL framework with contrastive embedding, named CE-GZSL, on five benchmark datasets. The results show that our CEGZSL method can outperform the state-of-the-arts by a significant margin on three datasets. Our codes are available on https://github.com/Hanzy1996/CE-GZSL. <|reference_end|>"
] | [
8,
14,
16,
18
] | {"<|multi_cite_1_1|>": "ss-925851", "<|multi_cite_1_3|>": "ss-925852", "<|multi_cite_2_1|>": "ss-925853", "<|multi_cite_2_2|>": "ss-1851233", "<|multi_cite_2_3|>": "arxiv-269706", "<|multi_cite_2_4|>": "ss-815198", "<|multi_cite_3_2|>": "ss-925854", "<|multi_cite_3_3|>": "arxiv-261432", "<|multi_cite_4_1|>": "arxiv-183124", "<|multi_cite_4_2|>": "ss-2543045", "<|multi_cite_4_4|>": "arxiv-372756", "<|cite_5|>": "arxiv-183124", "<|cite_6|>": "ss-2543045", "<|cite_8|>": "arxiv-372756", "<|cite_9|>": "ss-1242831", "<|multi_cite_10_2|>": "arxiv-94856", "<|multi_cite_10_3|>": "ss-1264660", "<|multi_cite_10_4|>": "ss-711069", "<|multi_cite_10_5|>": "arxiv-330926", "<|multi_cite_12_1|>": "ss-836089", "<|multi_cite_12_2|>": "ss-2273594", "<|multi_cite_13_1|>": "ss-980875", "<|multi_cite_13_2|>": "ss-870989", "<|cite_14|>": "arxiv-51600", "<|cite_15|>": "ss-925855"} |
2111.05530-0 | <|paper_start|> Title: Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for Linear Programming
Abstract: Nearly Optimal Linear Convergence of Stochastic Primal-Dual Methods for Linear Programming: There is a recent interest on first-order methods for linear programming (LP). In this paper,we propose a stochastic algorithm using variance reduction and restarts for solving sharp primal-dual problems such as LP. We show that the proposed stochastic method exhibits a linear convergence rate for solving sharp instances with a high probability. In addition, we propose an efficient coordinate-based stochastic oracle for unconstrained bilinear problems, which has $\mathcal O(1)$ per iteration cost and improves the complexity of the existing deterministic and stochastic algorithms. Finally, we show that the obtained linear convergence rate is nearly optimal (upto $\log$ terms) for a wide class of stochastic primal dual methods.
Introduction
Linear programming (LP), as one of the most fundamental tools in operations research and computer science, has been extensively studied in both academia and industry since the 1940s.
The applications of LP span various fields, including pricing and revenue management, transportation, network flow, scheduling, and many others <|cite_start|> (Reference: The stepping stone method of explaining linear programming calculations in transportation problems: Recent scientific research has made available a variety of tools which can be brought to bear on managerial problems. One of these tools, which has already proved its worth in practical applications, is linear programming. It is not the purpose of this article to explore the full ramifications of these research developments or even to explore the general field of linear programming. Only transportation-type problems and models will be discussed in any detail.) <|cite_end|> <|cite_start|> (Reference: A linear programming approach to production and employment scheduling: The problem of production and employment scheduling may be stated as follows. Given the monthly demands for the product turned out by a factory, what should be the monthly production rates and work force levels in order to minimize the total cost of regular payroll and overtime, hiring and layoffs, inventory and shortages incurred during a given planning interval of several months? This problem has received a classical solution in two papers by Holt, Modigliani, Muth, and Simon (Holt, C. C., F. Modigliani, H. A. Simon. 1955. A linear decision rule for production and employment scheduling. Management Sci. (October); Holt, C. C., F. Modigliani, J. F. Muth. 1956. Derivation of a linear decision rule for production and employment. Management Sci. (January).). These authors assumed quadratic cost functions. Their treatment of the problem will be referred to as "quadratic programming." It appears, however, that in the majority of practical applications and theoretical models the cost functions are assumed to be linear. It, therefore, seems desirable to have a method of solution for the linear case as well. In this paper it is shown that a solution can be obtained by linear programming methods. From the linear programming viewpoint, this paper is of an expository nature. "Management Technology", ISSN 0542-4917, was published as a separate journal from 1960 to 1964. In 1965 it was merged into Management Science.) <|cite_end|> <|cite_start|> (Reference: Production scheduling by the transportation method of linear programming: With fluctuating sales, a manufacturer must have fluctuating production, or fluctuating inventory, or both. Penalties are associated with either type of fluctuation. Several papers place this problem into a conventional linear-programming framework. This paper suggests that the same problem may be placed into a transportation-method framework and, further, that many transportation problems may be extended to include multiple time periods where this is meaningful. A generalized scheduling problem is placed here into the standard form of the transportation table.) <|cite_end|> <|cite_start|> (Reference: Linear Programming and Sequential Decisions: Using an illustration drawn from the area of inventory control, this paper demonstrates how a typical sequential probabilistic model may be formulated in terms of a an initial decision rule and b a Markov process, and then optimized by means of linear programming. This linear programming technique may turn out to be an efficient alternative to the functional equation approach in the numerical analysis of such problems. 
Regardless of computational significance, however, it is of interest that there should be such a close relationship between the two traditionally distinct areas of dynamic programming and linear programming.) <|cite_end|> <|cite_start|> (Reference: On the choice-based linear programming model for network revenue management: Gallego et al. [Gallego, G., G. Iyengar, R. Phillips, A. Dubey. 2004. Managing flexible products on a network. CORC Technical Report TR-2004-01, Department of Industrial Engineering and Operations Research, Columbia University, New York.] recently proposed a choice-based deterministic linear programming model (CDLP) for network revenue management (RM) that parallels the widely used deterministic linear programming (DLP) model. While they focused on analyzing “flexible products”---a situation in which the provider has the flexibility of using a collection of products (e.g., different flight times and/or itineraries) to serve the same market demand (e.g., an origin-destination connection)---their approach has broader implications for understanding choice-based RM on a network. In this paper, we explore the implications in detail. Specifically, we characterize optimal offer sets (sets of available network products) by extending to the network case a notion of “efficiency” developed by Talluri and van Ryzin [Talluri, K. T., G. J. van Ryzin. 2004. Revenue management under a general discrete choice model of consumer behavior. Management Sci.50 15--33.] for the single-leg, choice-based RM problem. We show that, asymptotically, as demand and capacity are scaled up, only these efficient sets are used in an optimal policy. This analysis suggests that efficiency is a potentially useful approach for identifying “good” offer sets on networks, as it is in the case of single-leg problems. Second, we propose a practical decomposition heuristic for converting the static CDLP solution into a dynamic control policy. The heuristic is quite similar to the familiar displacement-adjusted virtual nesting (DAVN) approximation used in traditional network RM, and it significantly improves on the performance of the static LP solution. We illustrate the heuristic on several numerical examples.) <|cite_end|>.
The dominant solvers for LP are essentially based on either the simplex method or the interior-point method, which are very mature nowadays and can usually provide reliable solutions to LP. However, the success of both methods heavily relies on efficiently solving linear systems using factorization, which has the following major drawbacks: (i) the factorization may run out of memory even though the original LP can fit in memory; (ii) it is highly challenging to take advantage of modern computing resources, such as distributed systems and GPUs, when solving linear systems.
Recently, it was shown that first-order methods (FOMs) with proper enhancements can identify high-quality solutions to LP problems quickly <|cite_start|> (Reference: Practical Large-Scale Linear Programming using Primal-Dual Hybrid
Gradient: We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP. We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017. With a target of $10^{-8}$ relative accuracy and 1 hour time limit, PDLP achieves a 6.3x reduction in the geometric mean of solve times and a 4.6x reduction in the number of instances unsolved (from 227 to 49). Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP, written in Julia, outperforms a commercial LP solver.) <|cite_end|>, which provides an alternative to traditional simplex and barrier methods for LP. FOMs utilize only the gradient information of the objective function to update the iterates and are widely used in many areas of optimization (in contrast to second-order methods, where the Hessian is used). The basic operation in FOMs is matrix-vector multiplication, which can avoid the drawbacks mentioned above.
In this work, we further push this line of research and study stochastic FOMs for LP. In contrast to deterministic first-order methods, the basic operation per iteration in stochastic FOMs is a vector operation. Such an operation is very cheap in general and is common practice in modern machine learning applications. As a drawback, the number of iterations a stochastic algorithm needs to obtain an approximate solution is typically higher than that of its deterministic counterpart.
Due to such a fundamental distinction, Nesterov <|cite_start|> (Reference: Gradient methods for minimizing composite functions: ) <|cite_end|> formally defines methods that require matrix factorization (or matrix inversion) as handling \emph{medium-scale} problems, methods that require matrix-vector multiplication as handling \emph{large-scale} problems, and methods that require vector operations as handling \emph{huge-scale} problems. Based on this definition, in the context of LP, the simplex method and the interior-point method are classified as handling medium-scale problems (this definition perhaps belies the practical efficiency of the two methods, but it is a fact that it is challenging to scale them up further, as mentioned above), deterministic FOMs <|cite_start|> (Reference: Practical Large-Scale Linear Programming using Primal-Dual Hybrid
Gradient: We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP. We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017. With a target of $10^{-8}$ relative accuracy and 1 hour time limit, PDLP achieves a 6.3x reduction in the geometric mean of solve times and a 4.6x reduction in the number of instances unsolved (from 227 to 49). Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP, written in Julia, outperforms a commercial LP solver.) <|cite_end|> <|cite_start|> (Reference: An admm-based interior-point method for large-scale linear programming: ABSTRACT In this paper, we propose a new framework to implement interior point method (IPM) in order to solve some very large-scale linear programs (LPs). Traditional IPMs typically use Newton's method to approximately solve a subproblem that aims to minimize a log-barrier penalty function at each iteration. Due its connection to Newton's method, IPM is often classified as second-order method – a genre that is attached with stability and accuracy at the expense of scalability. Indeed, computing a Newton step amounts to solving a large system of linear equations, which can be efficiently implemented if the input data are reasonably sized and/or sparse and/or well-structured. However, in case the above premises fail, then the challenge still stands on the way for a traditional IPM. To deal with this challenge, one approach is to apply the iterative procedure, such as preconditioned conjugate gradient method, to solve the system of linear equations. Since the linear system is different in each iteration, it is difficult to find good pre-conditioner to achieve the overall solution efficiency. In this paper, an alternative approach is proposed. Instead of applying Newton's method, we resort to the alternating direction method of multipliers (ADMM) to approximately minimize the log-barrier penalty function at each iteration, under the framework of primal–dual path-following for a homogeneous self-dual embedded LP model. The resulting algorithm is an ADMM-Based Interior Point Method, abbreviated as ABIP in this paper. The new method inherits stability from IPM and scalability from ADMM. Because of its self-dual embedding structure, ABIP is set to solve any LP without requiring prior knowledge about its feasibility. We conduct extensive numerical experiments testing ABIP with large-scale LPs from NETLIB and machine learning applications. The results demonstrate that ABIP compares favourably with other LP solvers including SDPT3, MOSEK, DSDP-CG and SCS.) <|cite_end|>are classified as handling large-scale problems, and in the paper we instead look at huge-scale problems.
We consider a general class of primal-dual problems with the form
\begin{equation}\label{eq:poi}
\min_{x}\max_{y} \mathcal{L}(x,y):= \Phi(x,y) + g_1(x)-g_2(y),
\end{equation}
where $\mathcal L(x,y)$ is convex in $x$ and concave in $y$, $g_1(x)$ is a simple convex function in $x$ and $g_2(y)$ is a simple convex function in $y$. In particular, the primal-dual formulation of standard form LP is
\begin{equation}\label{eq:lp}
\min_{x\ge 0}\max_{y} y^T Ax + c^Tx-b^Ty \ ,
\end{equation}
and a highly related problem is the unconstrained bilinear problem
\begin{equation}\label{eq:bilinear}
\min_{x}\max_{y} y^T Ax + c^Tx-b^Ty \ .
\end{equation}
Furthermore, we define $z=(x,y)$, $F(z)=[\nabla_x\Phi(x,y), -\nabla_y\Phi(x,y)]$ and $g(z)=g_1(x)+g_2(y)$ for notational convenience. We here assume there is an unbiased stochastic oracle $F_{\xi}(z)$ such that $\mathbb E[F_{\xi}(z)]= F(z)$ (see Section \ref{sec:app} for examples on how to construct the stochastic oracles).
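For the bilinear problem \eqref{eq:bilinear}, for instance, $F(z)=(A^\top y + c,\; b - Ax)$, and one standard way to build an unbiased oracle is to sample a row of $A$ to estimate $A^\top y$ and a column to estimate $Ax$, with importance weights. The sketch below illustrates this construction; the norm-proportional sampling probabilities are an assumed (common) choice, and the specific oracles analyzed later in the paper may use different distributions.
\begin{verbatim}
import numpy as np

def make_rowcol_oracle(A, b, c, rng):
    """Unbiased estimator of F(z) = (A^T y + c, b - A x) for the bilinear
    saddle point.  A row i is sampled to estimate A^T y and a column j to
    estimate A x.  Each call touches one row and one column of A, so the
    cost is O(m + n)."""
    p = np.linalg.norm(A, axis=1) ** 2
    p /= p.sum()                                   # row probabilities
    q = np.linalg.norm(A, axis=0) ** 2
    q /= q.sum()                                   # column probabilities

    def oracle(x, y):
        i = rng.choice(A.shape[0], p=p)
        j = rng.choice(A.shape[1], p=q)
        grad_x = (y[i] / p[i]) * A[i, :] + c       # E[.] = A^T y + c
        neg_grad_y = b - (x[j] / q[j]) * A[:, j]   # E[.] = b - A x
        return grad_x, neg_grad_y

    return oracle

rng = np.random.default_rng(0)
m, n = 5, 4
A = rng.normal(size=(m, n))
b, c = rng.normal(size=m), rng.normal(size=n)
F_xi = make_rowcol_oracle(A, b, c, rng)
samples = [F_xi(np.ones(n), np.ones(m))[0] for _ in range(20000)]
gap = np.abs(np.mean(samples, axis=0) - (A.T @ np.ones(m) + c)).max()
print(gap)   # shrinks toward 0 as the number of samples grows
\end{verbatim}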
Our key ideas for solving \eqref{eq:lp} efficiently are variance reduction and restarts.
Variance reduction is a very successful technique for finite sum stochastic minimization problems <|cite_start|> (Reference: Accelerating Stochastic Gradient Descent using
Predictive Variance Reduction: Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.) <|cite_end|> <|cite_start|> (Reference: SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives: In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.) <|cite_end|> <|cite_start|> (Reference: Minimizing Finite Sums with the Stochastic Average Gradient: We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p \textless{} 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.) <|cite_end|>. The basic idea is to reduce the variance in the stochastic gradient estimation by comparing it with the snapshots of the true gradient. Variance reduction can usually improve the convergence property of stochastic minimization algorithms <|cite_start|> (Reference: Accelerating Stochastic Gradient Descent using
Predictive Variance Reduction: Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.) <|cite_end|> <|cite_start|> (Reference: SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives: In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.) <|cite_end|> <|cite_start|> (Reference: Minimizing Finite Sums with the Stochastic Average Gradient: We propose the stochastic average gradient (SAG) method for optimizing the sum of a finite number of smooth convex functions. Like stochastic gradient (SG) methods, the SAG method's iteration cost is independent of the number of terms in the sum. However, by incorporating a memory of previous gradient values the SAG method achieves a faster convergence rate than black-box SG methods. The convergence rate is improved from O(1/k^{1/2}) to O(1/k) in general, and when the sum is strongly-convex the convergence rate is improved from the sub-linear O(1/k) to a linear convergence rate of the form O(p^k) for p \textless{} 1. Further, in many cases the convergence rate of the new method is also faster than black-box deterministic gradient methods, in terms of the number of gradient evaluations. Numerical experiments indicate that the new algorithm often dramatically outperforms existing SG and deterministic gradient methods, and that the performance may be further improved through the use of non-uniform sampling strategies.) <|cite_end|>. Recently, <|cite_start|> (Reference: Stochastic Variance Reduction for Variational Inequality Methods: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.) 
<|cite_end|> extends the variance reduction scheme to EGM for solving monotone variational inequalities (with a slight abuse of naming, we call this algorithm sEGM), which was shown to have an $O(1/\epsilon)$ convergence rate.
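Schematically, the variance-reduced operator estimate used in such methods combines a stochastic evaluation at the current point with the same stochastic evaluation at a snapshot point plus the full operator at the snapshot, so the estimate stays unbiased while its variance shrinks as the iterate approaches the snapshot. The toy sketch below illustrates the estimator for a finite-sum operator; the operator, sampling scheme, and snapshot schedule are placeholders for illustration, not the exact construction of sEGM.
\begin{verbatim}
import numpy as np

def svrg_operator_estimate(F_i, z, w, F_w, i):
    """Variance-reduced estimate F_i(z) - F_i(w) + F(w) of F(z), where
    F(z) = (1/N) sum_i F_i(z).  The estimate is unbiased over a uniformly
    random index i, and its variance vanishes as z approaches the snapshot w."""
    return F_i[i](z) - F_i[i](w) + F_w

# toy finite-sum operator F(z) = (1/N) sum_i M_i z  (illustrative only)
rng = np.random.default_rng(0)
d, N = 6, 10
Ms = [rng.normal(size=(d, d)) for _ in range(N)]
F_i = [lambda z, M=M: M @ z for M in Ms]
F = lambda z: sum(f(z) for f in F_i) / N

z, w = rng.normal(size=d), rng.normal(size=d)
F_w = F(w)                                  # one full evaluation per snapshot
idx = rng.integers(0, N, size=5000)
avg = np.mean([svrg_operator_estimate(F_i, z, w, F_w, i) for i in idx], axis=0)
print(np.linalg.norm(avg - F(z)))           # small: the estimator is unbiased
\end{verbatim}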
On the other hand, restarting is another standard technique in minimization problems, which is used to speed up the convergence of many deterministic algorithms <|cite_start|> (Reference: Adaptive Restart for Accelerated Gradient Schemes: ) <|cite_end|> <|cite_start|> (Reference: New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure: ) <|cite_end|> <|cite_start|> (Reference: Restarting Algorithms: Sometimes There Is Free Lunch: ) <|cite_end|>. Recently, <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|> introduces the sharpness of primal-dual problems, and further presents a simple restart scheme that accelerates the linear convergence of primal-dual algorithms.
This paper extends the restarted algorithm in <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|> to stochastic algorithms for solving sharp primal-dual problems such as LP. It turns out that while LP is sharp on any bounded region <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>, LP is not globally sharp (see Appendix~\ref{sec:sharp-subreg} and Appendix~\ref{sec:lp_not_global_sharp} for a counterexample). A fundamental difficulty for restarted stochastic algorithms is that, unlike in the deterministic case, the iterates may escape from any bounded region, so there is no guarantee that local sharpness helps the convergence of stochastic algorithms. We overcome this issue by presenting a high-probability argument and showing the linear convergence of the proposed restarted algorithms with high probability.
The performance of a randomized algorithm depends on the realization of its randomness. The traditional convergence analysis of such algorithms usually measures the expected performance, namely, the average performance over many runs of the algorithm. In contrast, we present high-probability results: when the algorithm is run multiple times, the stated guarantee holds for all but a prescribed fraction of the trajectories. In particular, when choosing the probability as $1/2$, we obtain a guarantee on the median performance of the stochastic trajectories, which can be viewed as an alternative to the expected performance studied in the literature.
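To convey the control flow of the restart scheme, the sketch below runs an inner extragradient loop for a fixed budget, restarts it from the averaged inner iterate, and repeats; on sharp instances this outer loop is what turns a sublinear inner rate into a linear rate. The inner solver here is plain (deterministic) extragradient on a toy bilinear problem, and the step size and restart lengths are placeholders, not the parameter choices analyzed later for the stochastic variance-reduced method.
\begin{verbatim}
import numpy as np

def extragradient_average(F, z0, eta, iters):
    """Inner loop: extragradient steps, returning the running average of the
    iterates, which is the point the outer loop restarts from."""
    z = z0.copy()
    z_bar = np.zeros_like(z0)
    for t in range(iters):
        z_half = z - eta * F(z)
        z = z - eta * F(z_half)
        z_bar += (z - z_bar) / (t + 1)
    return z_bar

def restarted(F, z0, eta, inner_iters, outer_iters):
    """Outer loop: repeatedly restart the inner method from its average."""
    z = z0.copy()
    for _ in range(outer_iters):
        z = extragradient_average(F, z, eta, inner_iters)
    return z

# toy unconstrained bilinear problem  min_x max_y  y^T A x
rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n))
F = lambda z: np.concatenate([A.T @ z[n:], -A @ z[:n]])   # (grad_x, -grad_y)
z0 = rng.normal(size=2 * n)
eta = 0.5 / np.linalg.norm(A, 2)
print(np.linalg.norm(restarted(F, z0, eta, 200, 10)))
# distance to the saddle point (0, 0); it shrinks across restarts
\end{verbatim}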
\begin{table}
\centering
\small{
\begin{threeparttable}
\begin{tabular}{|c|c|c|c|}
\hline
\thead{Algorithm} & \thead{\makecell{Per-iteration\\ Cost}} & \thead{Number of Iteration to\\ find an $\epsilon$ solution} & \thead{Total Cost} \\ \hline
\makecell{Deterministic \\ <|cite_start|> (Reference: The extragradient method for finding saddle points and other problems: ) <|cite_end|>} & $\mathcal{O}\pran{\text{nnz}(A)}$ & $\mathcal{O}\pran{\frac{\Vert A \Vert_2}{\epsilon}}$ & $\mathcal{O}\pran{\text{nnz}(A)\frac{\Vert A \Vert_2}{\epsilon}}$ \\ \hline
\makecell{Deterministic \\ Restart <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>} & $\mathcal{O}\pran{\text{nnz}(A)}$ & $\mathcal{O}\pran{\kappa_2 \log{\frac{1}{\epsilon}}}$ & $\mathcal{O}\pran{\text{nnz}(A)\kappa_2 \log{\frac{1}{\epsilon}}}$ \\ \hline
\makecell{SPDHG \\ <|cite_start|> (Reference: Stochastic Primal-Dual Hybrid Gradient Algorithm with Arbitrary Sampling and Imaging Applications: We propose a stochastic extension of the primal-dual hybrid gradient algorithm studied by Chambolle and Pock in 2011 to solve saddle point problems that are separable in the dual variable. The analysis is carried out for general convex-concave saddle point problems and problems that are either partially smooth / strongly convex or fully smooth / strongly convex. We perform the analysis for arbitrary samplings of dual variables, and obtain known deterministic results as a special case. Several variants of our stochastic method significantly outperform the deterministic variant on a variety of imaging tasks.) <|cite_end|>} & $\mathcal{O}(m+n)$ & ${\mathcal{O}}\pran{\frac{\sum_i \Vert A_{i\cdot} \Vert_2}{\epsilon}}$ & ${\mathcal{O}}\pran{\text{nnz}(A)+(m+n)\frac{\sum_i \Vert A_{i\cdot} \Vert_2}{\epsilon}}$ \\ \hline
\makecell{SPDHG\footnotemark \\ <|cite_start|> (Reference: On the Convergence of Stochastic Primal-Dual Hybrid Gradient: In this paper, we analyze the recently proposed stochastic primal-dual hybrid gradient (SPDHG) algorithm and provide new theoretical results. In particular, we prove almost sure convergence of the iterates to a solution and linear convergence with standard step sizes, independent of strong convexity constants. Our assumption for linear convergence is metric subregularity, which is satisfied for smooth and strongly convex problems in addition to many nonsmooth and/or nonstrongly convex problems, such as linear programs, Lasso, and support vector machines. In the general convex case, we prove optimal sublinear rates for the ergodic sequence and for randomly selected iterate, without bounded domain assumptions. We also provide numerical evidence showing that SPDHG with standard step sizes shows favorable and robust practical performance against its specialized strongly convex variant SPDHG-$\mu$ and other state-of-the-art algorithms including variance reduction methods and stochastic dual coordinate ascent.) <|cite_end|>} & $\mathcal {O}(m+n)$ & ${\mathcal O}\pran{\kappa_F^2 \log \frac 1 \epsilon}$ & ${\mathcal O}\pran{(m+n)\kappa_F^2 \log \frac 1 \epsilon}$ \\ \hline
\makecell{Conceptual \\ Proximal <|cite_start|> (Reference: Variance Reduction for Matrix Games: We present a randomized primal-dual algorithm that solves the problem $\min_{x} \max_{y} y^\top A x$ to additive error $\epsilon$ in time $\mathrm{nnz}(A) + \sqrt{\mathrm{nnz}(A)n}/\epsilon$, for matrix $A$ with larger dimension $n$ and $\mathrm{nnz}(A)$ nonzero entries. This improves the best known exact gradient methods by a factor of $\sqrt{\mathrm{nnz}(A)/n}$ and is faster than fully stochastic gradient methods in the accurate and/or sparse regime $\epsilon \le \sqrt{n/\mathrm{nnz}(A)}$. Our results hold for $x,y$ in the simplex (matrix games, linear programming) and for $x$ in an $\ell_2$ ball and $y$ in the simplex (perceptron / SVM, minimum enclosing ball). Our algorithm combines Nemirovski's "conceptual prox-method" and a novel reduced-variance gradient estimator based on "sampling from the difference" between the current iterate and a reference point.) <|cite_end|>} & $\mathcal{O}(m+n)$ & $\mathcal{O}\pran{\sqrt{\frac{\text{nnz}(A)}{m+n}}\frac{\Vert A \Vert_F}{\epsilon}}$ & \makecell{$\mathcal{O}\pran{\text{nnz}(A)+\frac{\sqrt{\text{nnz}(A)(m+n)}\Vert A \Vert_F}{\epsilon}}$} \\ \hline
\makecell{sEGM \\ <|cite_start|> (Reference: Stochastic Variance Reduction for Variational Inequality Methods: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.) <|cite_end|>} & $\mathcal{O}(m+n)$ & $\mathcal{O}\pran{\sqrt{\frac{\text{nnz}(A)}{m+n}}\frac{\Vert A \Vert_F}{\epsilon}}$ & \makecell{$\mathcal{O}\pran{\text{nnz}(A)+\frac{\sqrt{\text{nnz}(A)(m+n)}\Vert A \Vert_F}{\epsilon}}$} \\ \hline
\makecell{RsEGM\\ Oracle II \\ {(\bf This Paper)}} & $\mathcal{O}(m+n)$ & $\tilde{\mathcal{O}}\pran{\sqrt{\frac{\text{nnz}(A)}{m+n}}\kappa_F\text{polylog}(\frac 1 \epsilon)}$ & \makecell{$\tilde{\mathcal O}\pran{\text{nnz}(A) \log{\frac{1}{\epsilon}}+\sqrt{\text{nnz}(A)(m+n)}\kappa_F\text{polylog}(\frac 1 \epsilon)}$ } \\ \hline
\makecell{RsEGM\\ Oracle IV \\ {(\bf This Paper)}} & $\mathcal{O}(1)$ & $\tilde{\mathcal{O}}\pran{\sqrt{{\text{nnz}(A)}}\kappa_F\text{polylog}(\frac 1 \epsilon)}$ & \makecell{$\tilde{\mathcal O}\pran{\text{nnz}(A) \log{\frac{1}{\epsilon}}+\sqrt{\text{nnz}(A)}\kappa_F\text{polylog}(\frac 1 \epsilon)}$ \\ } \\ \hline
\end{tabular}
\caption{Comparison on Unconstrained Bilinear Problem \eqref{eq:bilinear}, where $\kappa_2=\frac{\Vert A \Vert_2}{\alpha}$, $\kappa_F=\frac{\Vert A \Vert_F}{\alpha}$.}
\label{tab:bilinear}
\end{threeparttable}}
\end{table}
\footnotetext{
The complexity of SPDHG <|cite_start|> (Reference: Stochastic Variance Reduction for Variational Inequality Methods: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.) <|cite_end|>involves complicated terms. In the table, we present a lower bound on the number of iterations and the total cost, i.e., they need at least this number of iterations (or total cost) to find an $\epsilon$-accuracy solution based on their analysis (see Appendix \ref{sec:compare-spdhg} for more details).}
\begin{table}
\centering
{\small
\begin{tabular}{ |c|c|c|c| }
\hline
\thead{Algorithm} & \thead{\makecell{Per-iteration\\ Cost}} & \thead{Number of Iteration} & \thead{Total Cost} \\ \hline
\makecell{Deterministic \\ <|cite_start|> (Reference: The extragradient method for finding saddle points and other problems: ) <|cite_end|>} & $\mathcal{O}(\text{nnz}(A))$ & $\mathcal{O}\pran{\frac{\Vert A \Vert_2}{\epsilon}}$ & $\mathcal{O}\pran{\text{nnz}(A)\frac{\Vert A \Vert_2}{\epsilon}}$ \\ \hline
\makecell{Deterministic\\ Restart <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>} & $\mathcal{O}(\text{nnz}(A))$ & $\mathcal{O}\pran{\kappa_2\log{\frac{1}{\epsilon}}}$ & $\mathcal{O}\pran{\text{nnz}(A)\kappa_2\log{\frac{1}{\epsilon}}}$ \\ \hline
\makecell{sEGM \\ <|cite_start|> (Reference: Stochastic Variance Reduction for Variational Inequality Methods: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.) <|cite_end|>} & $\mathcal{O}(m+n)$ & $\mathcal{O}\pran{\sqrt{\frac{\text{nnz}(A)}{m+n}}\frac{\Vert A \Vert_F}{\epsilon}}$ & \makecell{$\mathcal{O}\pran{\text{nnz}(A)+\frac{\sqrt{\text{nnz}(A)(m+n)}\Vert A \Vert_F}{\epsilon}}$} \\ \hline
\makecell{RsEGM \\ {(\bf This Paper)}} & $\mathcal{O}(m+n)$ & $\tilde{\mathcal{O}}\pran{\sqrt{\frac{\text{nnz}(A)}{m+n}}\kappa_F\text{polylog}(\frac 1 \epsilon)}$ & \makecell{$\tilde{\mathcal O}\pran{\text{nnz}(A)\log{\frac{1}{\epsilon}}+\sqrt{\text{nnz}(A)(m+n)}\kappa_F\text{polylog}(\frac 1 \epsilon)}$} \\ \hline
\end{tabular}}
\caption{Comparison on standard LP~\eqref{eq:lp}, where $\kappa_2=\frac{\Vert A \Vert_2}{1/[H(1+\Vert z^{0,0} \Vert+R_0)]}$, $\kappa_F=\frac{\Vert A \Vert_F}{1/[H(1+\Vert z^{0,0} \Vert+R_0)]}$, and $H$ is the Hoffman constant of the KKT system of the LP (see Example \ref{thm:lem-LP-sharp} for details).}
\label{tab:lp}
\end{table}
Table \ref{tab:bilinear} and Table \ref{tab:lp} present the per-iteration cost, the iteration complexity, and the total flop count of RsEGM and of multiple deterministic and stochastic algorithms for solving the unconstrained bilinear problem and LP, respectively. For unconstrained bilinear problems, deterministic algorithms need to compute the gradient of the objective, so the per-iteration cost is $O(\text{nnz}(A))$. Standard stochastic algorithms are based on row/column sampling, and the iteration cost is $O(m+n)$. We also present a stochastic coordinate scheme where the per-iteration cost is $O(1)$. While stochastic algorithms have low per-iteration cost, they usually require more iterations to identify an $\epsilon$-close solution than their deterministic counterparts. As we see in Table \ref{tab:bilinear}, compared with the optimal deterministic algorithms <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>, the total flop count of RsEGM with stochastic Oracle II is better when the matrix $A$ is dense and of low rank. With the coordinate gradient estimator Oracle IV, the total cost of RsEGM is even lower and improves on optimal deterministic algorithms by at least a factor of $\sqrt n$ when $A$ is dense. For standard LP, as seen in Table \ref{tab:lp}, stochastic algorithms require more iterations to achieve $\epsilon$-accuracy than for the unconstrained bilinear problem due to the presence of inequality constraints. Similar to the unconstrained bilinear setting, when the matrix $A$ is low-rank and dense, the total flop cost of RsEGM improves on that of the optimal deterministic algorithm. On the other hand, most of the previous works on stochastic algorithms obtain sublinear rates. The only exception is <|cite_start|> (Reference: Stochastic Variance Reduction for Variational Inequality Methods: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.) <|cite_end|>, where the authors show the linear convergence of SPDHG for solving problems satisfying global metric sub-regularity. Indeed, unconstrained bilinear problems satisfy global metric sub-regularity, while LP does not satisfy it globally. The complexity of SPDHG involves more complicated notation, and we present a more detailed comparison in Appendix~\ref{sec:lp_not_global_sharp} and Appendix~\ref{sec:compare-spdhg}, but our proposed algorithms are at least better than SPDHG by a factor of the condition number $\kappa_F$.
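To give a flavor of how a per-iteration cost of $O(1)$ can arise, the sketch below builds an unbiased estimator of $(A^\top y, Ax)$ whose output has a single nonzero coordinate per block, obtained by sampling one entry of $A$. Exploiting this sparsity (together with lazy handling of the constant vectors $b$ and $c$ and of the iterate updates) is what keeps the per-iteration cost constant; the entrywise sampling distribution shown here is an assumed choice, and the full bookkeeping behind Oracle IV is deferred to the main text.
\begin{verbatim}
import numpy as np

def make_entry_oracle(A, rng):
    """Unbiased estimator of (A^T y, A x) that reads a single entry of A.
    Each call returns two sparse updates (index, value); applying them
    lazily is what keeps the per-iteration cost O(1)."""
    P = A ** 2 / np.sum(A ** 2)            # entrywise sampling probabilities
    flat_p = P.ravel()

    def oracle(x, y):
        k = rng.choice(A.size, p=flat_p)
        i, j = divmod(k, A.shape[1])
        w = A[i, j] / P[i, j]
        return (j, w * y[i]), (i, w * x[j])   # sparse parts of A^T y and A x

    return oracle

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
oracle = make_entry_oracle(A, rng)
x, y = rng.normal(size=4), rng.normal(size=3)
est = np.zeros(4)
num_samples = 50000
for _ in range(num_samples):
    (j, v), _ = oracle(x, y)
    est[j] += v
print(np.abs(est / num_samples - A.T @ y).max())   # approaches 0 in the limit
\end{verbatim}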
\subsection{Contributions}
In this paper, we propose a restarted stochastic extragradient method with variance reduction for solving sharp primal-dual problems. We show that the proposed algorithm exhibits a linear convergence rate with high probability. To the best of our knowledge, this is the first stochastic algorithm with a linear rate for general standard-form LP problems~\eqref{eq:lp}. For unconstrained bilinear problems, our restarted scheme improves the complexity of the existing linearly convergent stochastic algorithm <|cite_start|> (Reference: Stochastic Variance Reduction for Variational Inequality Methods: We propose stochastic variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions. Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods both in Euclidean and Bregman setups. All proposed methods converge in the same setting as their deterministic counterparts and they either match or improve the best-known complexities for solving structured min-max problems. Our results reinforce the correspondence between variance reduction in variational inequalities and minimization. We also illustrate the improvements of our approach with numerical evaluations on matrix games.) <|cite_end|> by a factor of the condition number.
\subsection{Assumptions}
Throughout the paper, we have two assumptions. The first one is on the problem \eqref{eq:poi}:
\begin{ass}\label{ass:problem}
The problem \eqref{eq:poi} satisfies:\\
(i) The optimal solution set $\mathcal Z^* \neq \varnothing$.\\
(ii) The function $g: \mathcal Z \rightarrow \mathbb R \cup \{+\infty\}$ is proper convex lower semi-continuous.\\
(iii) The function $\mathcal L(x,y)$ is convex in $x$ and concave in $y$.
\end{ass}
The second assumption is on the stochastic gradient oracle:
\begin{ass}\label{ass:oracal}
We assume there exists a stochastic oracle $F_\xi:\RR^{m+n}\rightarrow \RR^{m+n}$ such that
(i) it is unbiased: $\mathbb E[F_{\xi}(z)]=F(z)$;
(ii) it is $L$-Lipschitz (in expectation): $\mathbb E [\Vert F_{\xi}(u)-F_{\xi}(v) \Vert ^2]\leq L^2\Vert u-v \Vert ^2$.
\end{ass}
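As a sanity check of Assumption \ref{ass:oracal} (this is a standard calculation, not a restatement of the oracles used later), consider estimating the component $Ax$ of the operator in \eqref{eq:bilinear} by sampling a single column: draw $j$ with probability $q_j = \Vert A_{\cdot j}\Vert^2/\Vert A \Vert_F^2$ and return $\frac{1}{q_j}A_{\cdot j}x_j$. The estimator is clearly unbiased, and for any $u,v$,
\[
\mathbb E \left[ \Big\Vert \tfrac{1}{q_j} A_{\cdot j} (u_j - v_j) \Big\Vert^2 \right]
= \sum_{j} \frac{\Vert A_{\cdot j}\Vert^2}{q_j}\, (u_j - v_j)^2
= \Vert A \Vert_F^2 \, \Vert u - v \Vert^2 ,
\]
so this component satisfies Assumption \ref{ass:oracal} (ii) with $L=\Vert A \Vert_F$, consistent with the appearance of $\Vert A \Vert_F$ in the condition number $\kappa_F$ in Tables \ref{tab:bilinear} and \ref{tab:lp}.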
\subsection{Related Literature}
\textbf{Convex-concave primal-dual problems.}
There has been a long history of convex-concave primal-dual problems, and many of the early works study a more general problem, monotone variational inequalities. Rockafellar proposed a proximal point method (PPM) <|cite_start|> (Reference: Monotone operators and the proximal point algorithm: For the problem of minimizing a lower semicontinuous proper convex function f on a Hilbert space, the proximal point algorithm in exact form generates a sequence $\{ z^k \} $ by taking $z^{k + 1} $ to be the minimizes of $f(z) + ({1 / {2c_k }})\| {z - z^k } \|^2 $, where $c_k > 0$. This algorithm is of interest for several reasons, but especially because of its role in certain computational methods based on duality, such as the Hestenes-Powell method of multipliers in nonlinear programming. It is investigated here in a more general form where the requirement for exact minimization at each iteration is weakened, and the subdifferential $\partial f$ is replaced by an arbitrary maximal monotone operator T. Convergence is established under several criteria amenable to implementation. The rate of convergence is shown to be “typically” linear with an arbitrarily good modulus if $c_k $ stays large enough, in fact superlinear if $c_k \to \infty $. The case of $T = \partial f$ is treated in extra detail. Applicati...) <|cite_end|>for solving monotone variational inequalities. Around the same time, Korpelevich proposed the extragradient method (EGM) <|cite_start|> (Reference: The extragradient method for finding saddle points and other problems: ) <|cite_end|>for convex-concave primal-dual problems. After that, there have been numerous results on the convergence analysis of these methods. In particular, Tseng <|cite_start|> (Reference: On linear convergence of iterative methods for the variational inequality problem: ) <|cite_end|>shows that PPM and EGM have linear convergence for strongly-convex-strongly-concave primal-dual problems or for unconstrained bilinear problems. Nemirovski proposes Mirror Prox algorithm in the seminal work <|cite_start|> (Reference: Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems: We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1,1}$ functions and solutions of variational inequalities with monotone Lipschitz continuous operators. Application examples include matrix games, eigenvalue minimization, and computing the Lovasz capacity number of a graph, and these are illustrated by numerical experiments with large-scale matrix games and Lovasz capacity problems.) <|cite_end|>, which is a more general form of EGM, and shows that EGM has $\mathcal O(\frac 1 \epsilon)$ sublinear convergence rate for solving general convex-concave primal-dual problems over a bounded and compact set. <|cite_start|> (Reference: Prox-Method with Rate of Convergence O(1/t) for Variational Inequalities with Lipschitz Continuous Monotone Operators and Smooth Convex-Concave Saddle Point Problems: We propose a prox-type method with efficiency estimate $O(\epsilon^{-1})$ for approximating saddle points of convex-concave C$^{1,1}$ functions and solutions of variational inequalities with monotone Lipschitz continuous operators. 
Application examples include matrix games, eigenvalue minimization, and computing the Lovasz capacity number of a graph, and these are illustrated by numerical experiments with large-scale matrix games and Lovasz capacity problems.) <|cite_end|> also builds up the connection between EGM and PPM: EGM is an approximation to PPM.
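In the notation of \eqref{eq:poi}, one EGM (Mirror Prox in the Euclidean setup) iteration with step size $\eta>0$ can be written as
\[
z^{k+1/2} = \mathrm{prox}_{\eta g}\big(z^{k} - \eta F(z^{k})\big), \qquad
z^{k+1} = \mathrm{prox}_{\eta g}\big(z^{k} - \eta F(z^{k+1/2})\big),
\]
that is, a first (extrapolation) step from $z^k$ produces an intermediate point, and the actual update from $z^k$ uses the operator evaluated at that intermediate point; this is the sense in which EGM approximates an implicit PPM step.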
Another line of research is to study a special case of \eqref{eq:poi} where $\Phi(x,y)=y^TAx$ is a bilinear term. Two well-known algorithms are Douglas-Rachford splitting <|cite_start|> (Reference: On the numerical solution of heat conduction problems in two and three space variables: ) <|cite_end|> <|cite_start|> (Reference: On the {D: . Let G be a connected graph with α ∈ [0 , 1], the D α -spectral radius of G is defined to be the spectral radius of the matrix D α ( G ), defined as D α ( G ) = αT ( G )+(1 − α ) D ( G ), where T ( G ) is a transmission diagonal matrix of G and D ( G ) denotes the distance matrix of G . In this paper, we give some sharp upper and lower bounds for the D α -spectral radius with respect to different graph parameters.) <|cite_end|> (with the Alternating Direction Method of Multipliers (ADMM) as a special case) and the Primal-Dual Hybrid Gradient method (PDHG) <|cite_start|> (Reference: A First-Order Primal-Dual Algorithm for Convex Problems with Applications to Imaging: ) <|cite_end|>.
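For the LP saddle point \eqref{eq:lp}, for example, one common way to write a PDHG iteration (the scheme underlying the PDLP solver mentioned earlier) with primal and dual step sizes $\tau,\sigma>0$ is
\[
x^{k+1} = \max\big\{0,\; x^{k} - \tau\,(c + A^{\top} y^{k})\big\}, \qquad
y^{k+1} = y^{k} + \sigma\big(A(2x^{k+1} - x^{k}) - b\big),
\]
so each iteration only requires two matrix-vector products with $A$ and a componentwise projection onto the nonnegative orthant.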
Very recently, there is a renewed interest on primal-dual methods, motivated by machine learning applications. For bilinear problems $\mathcal L(x,y)=y^TAx$ with full rank matrix $A$, <|cite_start|> (Reference: Improved Techniques for Training GANs: We present a variety of new architectural features and training procedures that we apply to the generative adversarial networks (GANs) framework. We focus on two applications of GANs: semi-supervised learning, and the generation of images that humans find visually realistic. Unlike most work on generative models, our primary goal is not to train a model that assigns high likelihood to test data, nor do we require the model to be able to learn well without using any labels. Using our new techniques, we achieve state-of-the-art results in semi-supervised classification on MNIST, CIFAR-10 and SVHN. The generated images are of high quality as confirmed by a visual Turing test: our model generates MNIST samples that humans cannot distinguish from real data, and CIFAR-10 samples that yield a human error rate of 21.3%. We also present ImageNet samples with unprecedented resolution and show that our methods enable the model to learn recognizable features of ImageNet classes.) <|cite_end|>shows that the Optimistic Gradient Descent Ascent (OGDA) converges linearly and later on <|cite_start|> (Reference: A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach: In this paper we consider solving saddle point problems using two variants of Gradient Descent-Ascent algorithms, Extra-gradient (EG) and Optimistic Gradient Descent Ascent (OGDA) methods. We show that both of these algorithms admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to develop a new framework for analyzing EG and OGDA for bilinear and strongly convex-strongly concave settings. Moreover, we use the proximal point approximation interpretation to generalize the results for OGDA for a wide range of parameters.) <|cite_end|>shows that OGDA , EGM and PPM all enjoy a linear convergence rate. <|cite_start|> (Reference: A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach: In this paper we consider solving saddle point problems using two variants of Gradient Descent-Ascent algorithms, Extra-gradient (EG) and Optimistic Gradient Descent Ascent (OGDA) methods. We show that both of these algorithms admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to develop a new framework for analyzing EG and OGDA for bilinear and strongly convex-strongly concave settings. Moreover, we use the proximal point approximation interpretation to generalize the results for OGDA for a wide range of parameters.) <|cite_end|>also presents an interesting observation that OGDA approximates PPM on bilinear problems. Lu <|cite_start|> (Reference: {An ${O: This paper presents an O.) <|cite_end|>analyzes the dynamics of unconstrained primal-dual algorithms under an ODE framework and yields tight conditions under which different algorithms exhibit linear convergence. However, an important caveat is that not all linear convergence rates are equal. 
<|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>shows that a simple restarted variant of these algorithms can improve the dependence of complexity on condition number in their linear convergence rate, as well as the empirical performance of the algorithms.
\textbf{Linear programming.}
Linear programming is a fundamental tool in operations research. The two dominant methods for solving LP problems are the simplex method <|cite_start|> (Reference: Linear Programming and Extensions: In real-world problems related to finance, business, and management, mathematicians and economists frequently encounter optimization problems. In this classic book, George Dantzig looks at a wealth of examples and develops linear programming methods for their solutions. He begins by introducing the basic theory of linear inequalities and describes the powerful simplex method used to solve them. Treatments of the price concept, the transportation problem, and matrix methods are also given, and key mathematical concepts such as the properties of convex sets and linear vector spaces are covered."The author of this book was the main force in establishing a new mathematical discipline, and he has contributed to its further development at every stage and from every angle. This volume ... is a treasure trove for those who work in this field--teachers, students, and users alike. Its encyclopaedic coverage, due in part to collaboration with other experts, makes it an absolute must."--S. Vajda, Zentralblatt fYr Mathematik und ihre Grenzgebiete) <|cite_end|>and the interior-point method <|cite_start|> (Reference: A New Polynomial-Time Algorithm for Linear Programming: We present a new polynomial-time algorithm for linear programming. In the worst case, the algorithm requiresO(n3.5L) arithmetic operations onO(L) bit numbers, wheren is the number of variables andL is the number of bits in the input. The running-time of this algorithm is better than the ellipsoid algorithm by a factor ofO(n2.5). We prove that given a polytopeP and a strictly interior point a εP, there is a projective transformation of the space that mapsP, a toP′, a′ having the following property. The ratio of the radius of the smallest sphere with center a′, containingP′ to the radius of the largest sphere with center a′ contained inP′ isO(n). The algorithm consists of repeated application of such projective transformations each followed by optimization over an inscribed sphere to create a sequence of points which converges to the optimal solution in polynomial time.) <|cite_end|>, and the commercial LP solvers based on these methods can provide reliable solutions even for fairly large instances. While the two methods are quite different, both require solving linear systems using factorization. As a result, it becomes very challenging to further scale up these two methods, in particular to take advantage of distributed computing. Recently, there has been a trend toward developing first-order methods for LP that rely only on matrix-vector multiplications <|cite_start|> (Reference: {Eclipse: A thousand readers see a thousand Hamlets. Likewise, the question "What is Eclipse" draws all kinds of answers. You can call it an excellent (and free) Java IDE, or a platform for developing IDEs; conspiracy theorists may even point at the "Gopyrigt IBM Corp" text on its splash screen and call it a Trojan horse with which IBM erodes the open source community. But I do not intend to get entangled in that here. To me, an agile J2EE developer, Eclipse is simply a handy tool, the key that opens the door to agility.) <|cite_end|> <|cite_start|> (Reference: Interior point methods 25 years later: ) <|cite_end|> <|cite_start|> (Reference: An admm-based interior-point method for large-scale linear programming: ABSTRACT In this paper, we propose a new framework to implement interior point method (IPM) in order to solve some very large-scale linear programs (LPs). Traditional IPMs typically use Newton's method to approximately solve a subproblem that aims to minimize a log-barrier penalty function at each iteration. 
Due its connection to Newton's method, IPM is often classified as second-order method – a genre that is attached with stability and accuracy at the expense of scalability. Indeed, computing a Newton step amounts to solving a large system of linear equations, which can be efficiently implemented if the input data are reasonably sized and/or sparse and/or well-structured. However, in case the above premises fail, then the challenge still stands on the way for a traditional IPM. To deal with this challenge, one approach is to apply the iterative procedure, such as preconditioned conjugate gradient method, to solve the system of linear equations. Since the linear system is different in each iteration, it is difficult to find good pre-conditioner to achieve the overall solution efficiency. In this paper, an alternative approach is proposed. Instead of applying Newton's method, we resort to the alternating direction method of multipliers (ADMM) to approximately minimize the log-barrier penalty function at each iteration, under the framework of primal–dual path-following for a homogeneous self-dual embedded LP model. The resulting algorithm is an ADMM-Based Interior Point Method, abbreviated as ABIP in this paper. The new method inherits stability from IPM and scalability from ADMM. Because of its self-dual embedding structure, ABIP is set to solve any LP without requiring prior knowledge about its feasibility. We conduct extensive numerical experiments testing ABIP with large-scale LPs from NETLIB and machine learning applications. The results demonstrate that ABIP compares favourably with other LP solvers including SDPT3, MOSEK, DSDP-CG and SCS.) <|cite_end|> <|cite_start|> (Reference: Practical Large-Scale Linear Programming using Primal-Dual Hybrid
Gradient: We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP. We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017. With a target of $10^{-8}$ relative accuracy and 1 hour time limit, PDLP achieves a 6.3x reduction in the geometric mean of solve times and a 4.6x reduction in the number of instances unsolved (from 227 to 49). Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP, written in Julia, outperforms a commercial LP solver.) <|cite_end|> <|cite_start|> (Reference: Infeasibility detection with primal-dual hybrid gradient for large-scale linear programming: We study the problem of detecting infeasibility of large-scale linear programming problems using the primal-dual hybrid gradient method (PDHG) of Chambolle and Pock (2011). The literature on PDHG has mostly focused on settings where the problem at hand is assumed to be feasible. When the problem is not feasible, the iterates of the algorithm do not converge. In this scenario, we show that the iterates diverge at a controlled rate towards a well-defined ray. The direction of this ray is known as the infimal displacement vector $v$. The first contribution of our work is to prove that this vector recovers certificates of primal and dual infeasibility whenever they exist. Based on this fact, we propose a simple way to extract approximate infeasibility certificates from the iterates of PDHG. We study three different sequences that converge to the infimal displacement vector: the difference of iterates, the normalized iterates, and the normalized average. All of them are easy to compute, and thus the approach is suitable for large-scale problems. Our second contribution is to establish tight convergence rates for these sequences. We demonstrate that the normalized iterates and the normalized average achieve a convergence rate of $O(1/k)$, improving over the known rate of $O(1/\sqrt{k})$. This rate is general and applies to any fixed-point iteration of a nonexpansive operator. Thus, it is a result of independent interest since it covers a broad family of algorithms, including, for example, ADMM, and can be applied settings beyond linear programming, such as quadratic and semidefinite programming. Further, in the case of linear programming we show that, under nondegeneracy assumptions, the iterates of PDHG identify the active set of an auxiliary feasible problem in finite time, which ensures that the difference of iterates exhibits eventual linear convergence to the infimal displacement vector.) <|cite_end|>. In general, these methods are easy to be parallelized and do not need to store the factorization in memory.
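For concreteness, the saddle-point reformulation that such matrix-vector-multiplication-based LP methods typically operate on can be sketched as follows (a standard-form illustration only; exact formulations, sign conventions and scalings differ across the cited solvers):
\[
\min_{x\ge 0,\,Ax=b} c^\top x
\quad\Longleftrightarrow\quad
\min_{x\ge 0}\,\max_{y}\;\; c^\top x + y^\top (b-Ax),
\]
and applying PDHG to the right-hand side requires only multiplications by $A$ and $A^\top$ plus a coordinate-wise projection onto the nonnegative orthant:
\[
x^{k+1}=\max\bigl\{0,\;x^k-\tau\,(c-A^\top y^k)\bigr\},\qquad
y^{k+1}=y^k+\sigma\,\bigl(b-A(2x^{k+1}-x^k)\bigr).
\]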
The traditional results of first-order methods for LP usually have a sublinear rate, due to the lack of strong convexity, which prevents them from identifying high-accuracy solutions. To deal with this issue, <|cite_start|> (Reference: An alternating direction method for linear programming: This paper presents a new, simple, massively parallel algorithm for linear programming, called the alternating step method. The algorithm is unusual in that it does not maintain primal feasibility, dual feasibility, or complementary slackness; rather, all these conditions are gradually met as the method proceeds. We derive the algorithm from an extension of the alternating direction method of multipliers for convex programming, giving a new algorithm for monotropic programming in the course of the development. Concentrating on the linear programming case, we give a proof that, under a simple condition on the algorithm parameters, the method converges at a globally linear rate. Finally, we give some preliminary computational results.) <|cite_end|>presents a variant of ADMM and shows the linear convergence of the proposed method for LP. More recently, <|cite_start|> (Reference: Partial smoothness and constant rank: The idea of partial smoothness in optimization blends certain smooth and nonsmooth properties of feasible regions and objective functions. As a consequence, the standard first-order conditions guarantee that diverse iterative algorithms (and post-optimality analyses) identify active structure or constraints. However, by instead focusing directly on the first-order conditions, the formal concept of partial smoothness simplifies dramatically: in basic differential geometric language, it is just a constant-rank condition. In this view, partial smoothness extends to more general mappings, such as saddlepoint operators underlying primal-dual splitting algorithms.) <|cite_end|> <|cite_start|> (Reference: Local linear convergence analysis of Primal--Dual splitting methods: Abstract In this paper, we study the local linear convergence properties of a versatile class of Primal–Dual splitting methods for minimizing composite non-smooth convex optimization problems. Under the assumption that the non-smooth components of the problem are partly smooth relative to smooth manifolds, we present a unified local convergence analysis framework for these methods. More precisely, in our framework, we first show that (i) the sequences generated by Primal–Dual splitting methods identify a pair of primal and dual smooth manifolds in a finite number of iterations, and then (ii) enter a local linear convergence regime, which is characterized based on the structure of the underlying active smooth manifolds. We also show how our results for Primal–Dual splitting can be specialized to cover existing ones on Forward–Backward splitting and Douglas–Rachford splitting/ADMM (alternating direction methods of multipliers). Moreover, based on these obtained local convergence analysis result, several practical acceleration techniques are discussed. To exemplify the usefulness of the obtained result, we consider several concrete numerical experiments arising from fields including signal/image processing, inverse problems and machine learning. The demonstration not only verifies the local linear convergence behaviour of Primal–Dual splitting methods, but also the insights on how to accelerate them in practice.) 
<|cite_end|>show that many primal-dual algorithms under a mild non-degeneracy condition have eventual linear convergence, but it may take a long time before reaching the linear convergence regime. <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>propose a restarted scheme for LP in the primal-dual formulation. They introduce a sharpness condition for primal-dual problems based on the normalized duality gap and show that the primal-dual formulation of LP is sharp on any bounded region. Then they provide restarted schemes for sharp primal-dual problems and show that their proposed algorithms have the optimal linear convergence rate (in a class of deterministic first-order methods) when solving sharp problems.
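For reference, the sharpness notion used there is defined through a normalized duality gap; informally, for a point $z=(x,y)$, a radius $r>0$, and feasible primal-dual set $Z$,
\[
\rho_r(z)\;=\;\frac{1}{r}\,\max_{\hat z=(\hat x,\hat y)\in Z,\;\|\hat z-z\|\le r}\;\bigl\{\mathcal L(x,\hat y)-\mathcal L(\hat x,y)\bigr\},
\]
and the problem is called sharp on a region if, roughly, $\rho_r(z)\ge\alpha\,\operatorname{dist}(z,Z^\star)$ holds there for some $\alpha>0$ and suitable radii; this is a paraphrase, and the precise statement is in the cited work.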
\textbf{Sharpness conditions and restart schemes.} The concept of sharpness was first proposed by Polyak <|cite_start|> (Reference: Nonconvex Weak Sharp Minima on Riemannian Manifolds: ) <|cite_end|>for minimization problems. Recently, there has been a line of work on developing first-order methods with faster convergence rates using sharpness. For example, linear convergence of restarted subgradient descent has been shown for sharp non-smooth functions, and there are other works on sharp non-convex minimization <|cite_start|> (Reference: Subgradient Methods for Sharp Weakly Convex Functions: ) <|cite_end|>. Sharpness can also be viewed as a certain error bound condition <|cite_start|> (Reference: Sharpness, Restart, and Acceleration: The {\L}ojasiewicz inequality shows that H\"olderian error bounds on the minimum of convex optimization problems hold almost generically. Here, we clarify results of \citet{Nemi85} who show that H\"olderian error bounds directly controls the performance of restart schemes. The constants quantifying error bounds are of course unobservable, but we show that optimal restart strategies are robust, and searching for the best scheme only increases the complexity by a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated methods.) <|cite_end|>. Recently, <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>introduces a sharpness condition for primal-dual problems. A highly related concept is metric subregularity for variational inequalities, which is a weaker condition than the sharpness condition proposed in <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|> (see Appendix \ref{sec:sharp-subreg} for a discussion). Under such conditions, <|cite_start|> (Reference: On the Convergence of Stochastic Primal-Dual Hybrid Gradient: In this paper, we analyze the recently proposed stochastic primal-dual hybrid gradient (SPDHG) algorithm and provide new theoretical results. In particular, we prove almost sure convergence of the iterates to a solution and linear convergence with standard step sizes, independent of strong convexity constants. Our assumption for linear convergence is metric subregularity, which is satisfied for smooth and strongly convex problems in addition to many nonsmooth and/or nonstrongly convex problems, such as linear programs, Lasso, and support vector machines. In the general convex case, we prove optimal sublinear rates for the ergodic sequence and for randomly selected iterate, without bounded domain assumptions. We also provide numerical evidence showing that SPDHG with standard step sizes shows favorable and robust practical performance against its specialized strongly convex variant SPDHG-$\mu$ and other state-of-the-art algorithms including variance reduction methods and stochastic dual coordinate ascent.) <|cite_end|> <|cite_start|> (Reference: Quadratic error bound of the smoothed gap and the restarted averaged primal-dual hybrid gradient: We study the linear convergence of the primal-dual hybrid gradient method. After a review of current analyses, we show that they do not explain properly the behavior of the algorithm, even on the most simple problems. We thus introduce the quadratic error bound of the smoothed gap, a new regularity assumption that holds for a wide class of optimization problems. 
Equipped with this tool, we manage to prove tighter convergence rates. Then, we show that averaging and restarting the primal-dual hybrid gradient allows us to leverage better the regularity constant. Numerical experiments on linear and quadratic programs, ridge regression and image denoising illustrate the findings of the paper.) <|cite_end|>present the linear convergence for stochastic PDHG and deterministic PDHG, respectively.
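For minimization problems, the sharpness (weak sharp minima) condition referred to above is commonly written as
\[
f(x)-f^\star\;\ge\;\mu\,\operatorname{dist}\bigl(x,\mathcal X^\star\bigr)\qquad\text{for all }x\text{ in a region of interest},
\]
for some $\mu>0$, where $\mathcal X^\star$ is the solution set; restart schemes exploit such a bound to convert sublinear per-stage guarantees into an overall linear rate.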
Restarting is a powerful technique in optimization. It can improve the practical and theoretical convergence of a base algorithm without modification to the base algorithm <|cite_start|> (Reference: Restarting Algorithms: Sometimes There Is Free Lunch: ) <|cite_end|>. Recently, there have been extensive works on this technique in smooth convex optimization <|cite_start|> (Reference: Adaptive Restart for Accelerated Gradient Schemes: ) <|cite_end|> <|cite_start|> (Reference: Sharpness, Restart, and Acceleration: The {\L}ojasiewicz inequality shows that H\"olderian error bounds on the minimum of convex optimization problems hold almost generically. Here, we clarify results of \citet{Nemi85} who show that H\"olderian error bounds directly controls the performance of restart schemes. The constants quantifying error bounds are of course unobservable, but we show that optimal restart strategies are robust, and searching for the best scheme only increases the complexity by a logarithmic factor compared to the optimal bound. Overall then, restart schemes generically accelerate accelerated methods.) <|cite_end|>, non-smooth convex optimization <|cite_start|> (Reference: New computational guarantees for solving convex optimization problems with first order methods, via a function growth condition measure: ) <|cite_end|>and stochastic convex optimization <|cite_start|> (Reference: Accelerating Stochastic Gradient Descent using
Predictive Variance Reduction: Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.) <|cite_end|> <|cite_start|> (Reference: A universal catalyst for first-order optimization: We introduce a generic scheme for accelerating first-order optimization methods in the sense of Nesterov, which builds upon a new analysis of the accelerated proximal point algorithm. Our approach consists of minimizing a convex objective by approximately solving a sequence of well-chosen auxiliary problems, leading to faster convergence. This strategy applies to a large class of algorithms, including gradient descent, block coordinate descent, SAG, SAGA, SDCA, SVRG, Finito/MISO, and their proximal variants. For all of these methods, we provide acceleration and explicit support for non-strongly convex objectives. In addition to theoretical speed-up, we also show that acceleration is useful in practice, especially for ill-conditioned problems where we measure significant improvements.) <|cite_end|> <|cite_start|> (Reference: Rest-katyusha: exploiting the solution's structure via scheduled restart schemes: We propose a structure-adaptive variant of the state-of-the-art stochastic variance-reduced gradient algorithm Katyusha for the regularized empirical risk minimization. The proposed method is able to exploit the intrinsic low-dimensional structure of the solution, such as sparsity and low-rank, which is enforced by the non-smooth regularization, to achieve even faster convergence rate. This algorithmic improvement is done by restarting the Katyusha algorithm at a certain carefully-chosen frequency according to a modified version of restricted strong-convexity. Our analysis demonstrates that the proposed method is globally convergent and enjoys a local accelerated linear rate with respect to the low-dimensional structure of the solution represented by the restricted strong-convexity, even when the cost function itself is not strongly-convex. Since in practice the restricted strong-convexity is usually unknown and hard to be estimated accurately, we proposed two practical restart schemes. The first one is restarting the algorithm with a rough restricted strong-convexity estimate which is provably robust but have a compromise on the convergence rate. The second variant is based on the adaptive restart via convergence speed check. The numerical results on benchmark datasets demonstrate the effectiveness of our approach.) <|cite_end|>. For sharp primal-dual problems, <|cite_start|> (Reference: Faster first-order primal-dual methods for linear programming using restarts and sharpness: ) <|cite_end|>propose fixed-frequency and adaptive restart on a large class of base primal-dual algorithms including PDHG, ADMM and EGM.
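The following schematic (illustrative pseudocode only; the callables \texttt{base\_step} and \texttt{gap} are hypothetical stand-ins for one step of a base primal-dual method and for a duality-gap estimate, and $z$ is a stacked primal-dual vector) sketches the generic structure of such restart schemes: run the base method, track the running average of its iterates, and restart from that average either at a fixed frequency or adaptively once the gap has decreased sufficiently.
\begin{verbatim}
def restarted_primal_dual(z0, base_step, gap, beta=0.5, outer_iters=20, inner_iters=1000):
    """Schematic restart wrapper around a base primal-dual method (e.g. PDHG or EGM).
    base_step(z) returns the next iterate; gap(z) returns a duality-gap estimate.
    Dropping the gap test and always running inner_iters steps gives the
    fixed-frequency variant; keeping it gives the adaptive variant."""
    z = z0
    for _ in range(outer_iters):
        gap0 = gap(z)
        avg, cur = z, z
        for t in range(1, inner_iters + 1):
            cur = base_step(cur)
            avg = avg + (cur - avg) / t        # running average of inner iterates
            if gap(avg) <= beta * gap0:        # sufficient decrease -> restart
                break
        z = avg                                 # restart from the averaged iterate
    return z
\end{verbatim}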
\textbf{Variance reduction and primal-dual problems.}
Variance reduction technique <|cite_start|> (Reference: SAGA: A Fast Incremental Gradient Method With Support for Non-Strongly Convex Composite Objectives: In this work we introduce a new optimisation method called SAGA in the spirit of SAG, SDCA, MISO and SVRG, a set of recently proposed incremental gradient algorithms with fast linear convergence rates. SAGA improves on the theory behind SAG and SVRG, with better theoretical convergence rates, and has support for composite objectives where a proximal operator is used on the regulariser. Unlike SDCA, SAGA supports non-strongly convex problems directly, and is adaptive to any inherent strong convexity of the problem. We give experimental results showing the effectiveness of our method.) <|cite_end|> <|cite_start|> (Reference: Accelerating Stochastic Gradient Descent using
Predictive Variance Reduction: Stochastic gradient descent is popular for large scale optimization but has slow convergence asymptotically due to the inherent variance. To remedy this problem, we introduce an explicit variance reduction method for stochastic gradient descent which we call stochastic variance reduced gradient (SVRG). For smooth and strongly convex functions, we prove that this method enjoys the same fast convergence rate as those of stochastic dual coordinate ascent (SDCA) and Stochastic Average Gradient (SAG). However, our analysis is significantly simpler and more intuitive. Moreover, unlike SDCA or SAG, our method does not require the storage of gradients, and thus is more easily applicable to complex problems such as some structured prediction problems and neural network learning.) <|cite_end|>is developed to improve the convergence rate for stochastic algorithms upon pure SGD for minimization problems. There are extensive works on variants of stochastic variance reduction for minimization problems under various settings (see <|cite_start|> (Reference: Variance-Reduced Methods for Machine Learning: Stochastic optimization lies at the heart of machine learning, and its cornerstone is stochastic gradient descent (SGD), a method introduced over 60 years ago. The last 8 years have seen an exciting new development: variance reduction (VR) for stochastic optimization methods. These VR methods excel in settings where more than one pass through the training data is allowed, achieving a faster convergence than SGD in theory as well as practice. These speedups underline the surge of interest in VR methods and the fast-growing body of work on this topic. This review covers the key principles and main developments behind VR methods for optimization with finite data sets and is aimed at non-expert readers. We focus mainly on the convex setting, and leave pointers to readers interested in extensions for minimizing non-convex functions.) <|cite_end|>for a recent overview).
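As a point of reference for the minimization setting discussed here, the following minimal sketch shows an SVRG-style variance-reduced gradient estimator (illustrative only; \texttt{grad\_i} is an assumed oracle returning the gradient of the $i$-th component):
\begin{verbatim}
import numpy as np

def svrg(grad_i, n, w0, step, epochs=30, inner=None):
    """Minimal SVRG sketch for min_w (1/n) * sum_i f_i(w).
    grad_i(i, w) is assumed to return the gradient of the i-th component at w."""
    w = w0.copy()
    inner = inner if inner is not None else 2 * n
    rng = np.random.default_rng(0)
    for _ in range(epochs):
        snapshot = w.copy()
        full_grad = sum(grad_i(i, snapshot) for i in range(n)) / n
        for _ in range(inner):
            i = rng.integers(n)
            # unbiased estimator whose variance shrinks as w approaches the snapshot
            v = grad_i(i, w) - grad_i(i, snapshot) + full_grad
            w = w - step * v
    return w
\end{verbatim}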
Compared with the extensive works on minimization problem, the research of variance-reduced methods on primal-dual problems is fairly limited. <|cite_start|> (Reference: Stochastic Variance Reduction Methods for Saddle-Point Problems: We consider convex-concave saddle-point problems where the objective functions may be split in many components, and extend recent stochastic variance reduction methods (such as SVRG or SAGA) to provide the first large-scale linearly convergent algorithms for this class of problems which is common in machine learning. While the algorithmic extension is straightforward, it comes with challenges and opportunities: (a) the convex minimization analysis does not apply and we use the notion of monotone operators to prove convergence, showing in particular that the same algorithm applies to a larger class of problems, such as variational inequalities, (b) there are two notions of splits, in terms of functions, or in terms of partial derivatives, (c) the split does need to be done with convex-concave terms, (d) non-uniform sampling is key to an efficient algorithm, both in theory and practice, and (e) these incremental algorithms can be easily accelerated using a simple extension of the "catalyst" framework, leading to an algorithm which is always superior to accelerated batch algorithms.) <|cite_end|>studies stochastic forward-backward algorithm with variance reduction for primal-dual problems and, more generally, monotone inclusions. Under strong monotonicity, they prove a linear convergence rate and improve the complexity of deterministic methods for bilinear problems. | [
"<|reference_start|> Practical Large-Scale Linear Programming using Primal-Dual Hybrid\nGradient: We present PDLP, a practical first-order method for linear programming (LP) that can solve to the high levels of accuracy that are expected in traditional LP applications. In addition, it can scale to very large problems because its core operation is matrix-vector multiplications. PDLP is derived by applying the primal-dual hybrid gradient (PDHG) method, popularized by Chambolle and Pock (2011), to a saddle-point formulation of LP. PDLP enhances PDHG for LP by combining several new techniques with older tricks from the literature; the enhancements include diagonal preconditioning, presolving, adaptive step sizes, and adaptive restarting. PDLP improves the state of the art for first-order methods applied to LP. We compare PDLP with SCS, an ADMM-based solver, on a set of 383 LP instances derived from MIPLIB 2017. With a target of $10^{-8}$ relative accuracy and 1 hour time limit, PDLP achieves a 6.3x reduction in the geometric mean of solve times and a 4.6x reduction in the number of instances unsolved (from 227 to 49). Furthermore, we highlight standard benchmark instances and a large-scale application (PageRank) where our open-source prototype of PDLP, written in Julia, outperforms a commercial LP solver. <|reference_end|>",
"<|reference_start|> Faster first-order primal-dual methods for linear programming using restarts and sharpness: <|reference_end|>",
"<|reference_start|> The extragradient method for finding saddle points and other problems: <|reference_end|>",
"<|reference_start|> A Unified Analysis of Extra-gradient and Optimistic Gradient Methods for Saddle Point Problems: Proximal Point Approach: In this paper we consider solving saddle point problems using two variants of Gradient Descent-Ascent algorithms, Extra-gradient (EG) and Optimistic Gradient Descent Ascent (OGDA) methods. We show that both of these algorithms admit a unified analysis as approximations of the classical proximal point method for solving saddle point problems. This viewpoint enables us to develop a new framework for analyzing EG and OGDA for bilinear and strongly convex-strongly concave settings. Moreover, we use the proximal point approximation interpretation to generalize the results for OGDA for a wide range of parameters. <|reference_end|>"
] | [
5,
21,
22,
45
] | {"<|multi_cite_1_1|>": "ss-1338326", "<|multi_cite_1_2|>": "ss-965467", "<|multi_cite_1_3|>": "ss-1338327", "<|multi_cite_1_4|>": "ss-1103474", "<|multi_cite_1_6|>": "ss-1364874", "<|cite_2|>": "ss-815997", "<|cite_3|>": "ss-736146", "<|multi_cite_4_1|>": "ss-815997", "<|multi_cite_4_2|>": "ss-1089120", "<|multi_cite_5_1|>": "ss-1068523", "<|multi_cite_5_2|>": "arxiv-62952", "<|multi_cite_5_3|>": "arxiv-50109", "<|multi_cite_6_1|>": "ss-1068523", "<|multi_cite_6_2|>": "arxiv-62952", "<|multi_cite_6_3|>": "arxiv-50109", "<|cite_7|>": "arxiv-321636", "<|multi_cite_8_1|>": "ss-1376334", "<|multi_cite_8_2|>": "ss-2301073", "<|multi_cite_8_4|>": "ss-1338328", "<|cite_9|>": "ss-678484", "<|cite_10|>": "ss-678484", "<|cite_11|>": "ss-678484", "<|cite_12|>": "ss-752236", "<|cite_13|>": "ss-678484", "<|cite_14|>": "arxiv-126868", "<|cite_15|>": "ss-1563208", "<|cite_16|>": "arxiv-212899", "<|cite_17|>": "arxiv-321636", "<|cite_18|>": "arxiv-321636", "<|cite_19|>": "ss-752236", "<|cite_20|>": "ss-678484", "<|cite_21|>": "arxiv-321636", "<|cite_22|>": "ss-678484", "<|cite_23|>": "arxiv-321636", "<|cite_24|>": "arxiv-321636", "<|cite_25|>": "ss-1931934", "<|cite_26|>": "ss-752236", "<|cite_27|>": "ss-2383888", "<|cite_28|>": "ss-805309", "<|cite_29|>": "ss-805309", "<|multi_cite_30_1|>": "ss-854573", "<|multi_cite_30_2|>": "ss-1962817", "<|cite_31|>": "ss-786645", "<|cite_32|>": "ss-1374057", "<|cite_33|>": "arxiv-188747", "<|cite_34|>": "arxiv-188747", "<|cite_35|>": "ss-749414", "<|cite_36|>": "ss-678484", "<|cite_37|>": "ss-1536191", "<|cite_38|>": "ss-947086", "<|multi_cite_39_1|>": "ss-1114029", "<|multi_cite_39_2|>": "ss-1985299", "<|multi_cite_39_3|>": "ss-1089120", "<|multi_cite_39_4|>": "ss-815997", "<|multi_cite_39_5|>": "ss-1544673", "<|cite_40|>": "ss-2513294", "<|multi_cite_41_1|>": "ss-1338329", "<|multi_cite_41_2|>": "ss-1338330", "<|cite_42|>": "ss-678484", "<|cite_43|>": "ss-1338331", "<|cite_45|>": "ss-1521882", "<|cite_46|>": "ss-985445", "<|cite_47|>": "ss-678484", "<|cite_48|>": "ss-678484", "<|multi_cite_49_1|>": "ss-1563208", "<|multi_cite_49_2|>": "ss-678485", "<|cite_50|>": "ss-1338328", "<|multi_cite_51_1|>": "ss-1376334", "<|multi_cite_51_2|>": "ss-985445", "<|multi_cite_52_1|>": "ss-2301073", "<|multi_cite_53_1|>": "ss-1068523", "<|multi_cite_53_2|>": "ss-1263411", "<|multi_cite_53_3|>": "ss-847627", "<|cite_54|>": "ss-678484", "<|multi_cite_55_1|>": "arxiv-62952", "<|multi_cite_55_2|>": "ss-1068523", "<|cite_56|>": "arxiv-293397", "<|cite_57|>": "arxiv-98330", "<|cite_58|>": "arxiv-212899", "<|cite_59|>": "arxiv-321636", "<|multi_cite_60_1|>": "arxiv-79238", "<|multi_cite_60_2|>": "ss-925051", "<|cite_61|>": "arxiv-212899"} |
2310.01860-1 | <|cite_start|> (Reference: Stochastic Optimization with Heavy-Tailed Noise via Accelerated Gradient Clipping: In this paper, we propose a new accelerated stochastic first-order method called clipped-SSTM for smooth convex stochastic optimization with heavy-tailed distributed noise in stochastic gradients and derive the first high-probability complexity bounds for this method closing the gap in the theory of stochastic optimization with heavy-tailed noise. Our method is based on a special variant of accelerated Stochastic Gradient Descent (SGD) and clipping of stochastic gradients. We extend our method to the strongly convex case and prove new complexity bounds that outperform state-of-the-art results in this case. Finally, we extend our proof technique and derive the first non-trivial high-probability complexity bounds for SGD with clipping without light-tails assumption on the noise.) <|cite_end|>for smooth convex and strongly convex minimization problems. <|cite_start|> (Reference: High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise: Stochastic first-order methods are standard for training large-scale machine learning models. Random behavior may cause a particular run of an algorithm to result in a highly suboptimal objective value, whereas theoretical guarantees are usually proved for the expectation of the objective value. Thus, it is essential to theoretically guarantee that algorithms provide small objective residual with high probability. Existing methods for non-smooth stochastic convex optimization have complexity bounds with the dependence on the confidence level that is either negative-power or logarithmic but under an additional assumption of sub-Gaussian (light-tailed) noise distribution that may not hold in practice. In our paper, we resolve this issue and derive the first high-probability convergence results with logarithmic dependence on the confidence level for non-smooth convex stochastic optimization problems with non-sub-Gaussian (heavy-tailed) noise. To derive our results, we propose novel stepsize rules for two stochastic methods with gradient clipping. Moreover, our analysis works for generalized smooth objectives with H\"older-continuous gradients, and for both methods, we provide an extension for strongly convex problems. Finally, our results imply that the first (accelerated) method we consider also has optimal iteration and oracle complexity in all the regimes, and the second one is optimal in the non-smooth setting.) <|cite_end|>tightens them and generalizes to the case of problems with H\"older-continuous gradients and <|cite_start|> (Reference: Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise: Stochastic first-order methods such as Stochastic Extragradient (SEG) or Stochastic Gradient Descent-Ascent (SGDA) for solving smooth minimax problems and, more generally, variational inequality problems (VIP) have been gaining a lot of attention in recent years due to the growing popularity of adversarial formulations in machine learning. However, while high-probability convergence bounds are known to reflect the actual behavior of stochastic methods more accurately, most convergence results are provided in expectation. Moreover, the only known high-probability complexity results have been derived under restrictive sub-Gaussian (light-tailed) noise and bounded domain assumption [Juditsky et al., 2011]. 
In this work, we prove the first high-probability complexity results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone VIPs with non-sub-Gaussian (heavy-tailed) noise and unbounded domains. In the monotone case, our results match the best-known ones in the light-tails case [Juditsky et al., 2011], and are novel for structured non-monotone problems such as negative comonotone, quasi-strongly monotone, and/or star-cocoercive ones. We achieve these results by studying SEG and SGDA with clipping. In addition, we numerically validate that the gradient noise of many practical GAN formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA.) <|cite_end|>derives high-probability convergence rates in the case of VIPs. <|cite_start|> (Reference: High-Probability Bounds for Stochastic Optimization and Variational Inequalities: the Case of Unbounded Variance: During recent years the interest of optimization and machine learning communities in high-probability convergence of stochastic optimization methods has been growing. One of the main reasons for this is that high-probability complexity bounds are more accurate and less studied than in-expectation ones. However, SOTA high-probability non-asymptotic convergence results are derived under strong assumptions such as the boundedness of the gradient noise variance or of the objective's gradient itself. In this paper, we propose several algorithms with high-probability convergence results under less restrictive assumptions. In particular, we derive new high-probability convergence results under the assumption that the gradient/operator noise has bounded central $\alpha$-th moment for $\alpha \in (1,2]$ in the following setups: (i) smooth non-convex / Polyak-Lojasiewicz / convex / strongly convex / quasi-strongly convex minimization problems, (ii) Lipschitz / star-cocoercive and monotone / quasi-strongly monotone variational inequalities. These results justify the usage of the considered methods for solving problems that do not fit standard functional classes studied in stochastic optimization.) <|cite_end|>relaxes the assumption of bounded variance to Assumption~\ref{as:bounded_alpha_moment} for all problem classes mentioned above, and the results under the same assumption are also derived for \algname{clipped-SGD} (without acceleration) by <|cite_start|> (Reference: High probability convergence of clipped-sgd under heavy-tailed noise: While the convergence behaviors of stochastic gradient methods are well understood \emph{in expectation}, there still exist many gaps in the understanding of their convergence with \emph{high probability}, where the convergence rate has a logarithmic dependency on the desired success probability parameter. In the \emph{heavy-tailed noise} setting, where the stochastic gradient noise only has bounded $p$-th moments for some $p\in(1,2]$, existing works could only show bounds \emph{in expectation} for a variant of stochastic gradient descent (SGD) with clipped gradients, or high probability bounds in special cases (such as $p=2$) or with extra assumptions (such as the stochastic gradients having bounded non-central moments). In this work, using a novel analysis framework, we present new and time-optimal (up to logarithmic factors) \emph{high probability} convergence bounds for SGD with clipping under heavy-tailed noise for both convex and non-convex smooth objectives using only minimal assumptions.) 
<|cite_end|>in the convex and non-convex cases.
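The clipping operator that is common to the methods discussed above can be sketched as follows (an illustrative implementation; the step size \texttt{gamma} and clipping level \texttt{lam} are placeholders for the schedules analyzed in the cited works):
\begin{verbatim}
import numpy as np

def clip(g, lam):
    """Rescale g so that its Euclidean norm does not exceed the clipping level lam."""
    norm = np.linalg.norm(g)
    return g if norm <= lam else (lam / norm) * g

def clipped_sgd_step(x, stoch_grad, gamma, lam):
    """One step of clipped SGD: x <- x - gamma * clip(g, lam), g a stochastic gradient."""
    return x - gamma * clip(stoch_grad(x), lam)
\end{verbatim}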
\paragraph{High-probability bounds for composite convex problems.} <|cite_start|> (Reference: Algorithms of Robust Stochastic Optimization Based on Mirror Descent Method: ) <|cite_end|>propose a truncated version of Mirror Descent for convex and strongly convex composite problems and prove non-accelerated rates of convergence under bounded variance and \emph{bounded domain} assumptions. Accelerated results under bounded variance assumption for strongly convex composite problems are proven by <|cite_start|> (Reference: From low probability to high confidence in stochastic convex optimization: Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on "light-tail" noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.) <|cite_end|>, who propose an approach based on robust distance estimation. Since this approach requires solving some auxiliary problem at each iteration of the method, the complexity bound from <|cite_start|> (Reference: From low probability to high confidence in stochastic convex optimization: Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on "light-tail" noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization.) <|cite_end|>contains extra logarithmic factors independent of the confidence level. Finally, in their very recent work, <|cite_start|> (Reference: Improved convergence in high probability of clipped gradient methods with heavy tails: In this work, we study the convergence \emph{in high probability} of clipped gradient methods when the noise distribution has heavy tails, ie., with bounded $p$th moments, for some $1<p\le2$. Prior works in this setting follow the same recipe of using concentration inequalities and an inductive argument with union bound to bound the iterates across all iterations. This method results in an increase in the failure probability by a factor of $T$, where $T$ is the number of iterations. We instead propose a new analysis approach based on bounding the moment generating function of a well chosen supermartingale sequence. 
We improve the dependency on $T$ in the convergence guarantee for a wide range of algorithms with clipped gradients, including stochastic (accelerated) mirror descent for convex objectives and stochastic gradient descent for nonconvex objectives. This approach naturally allows the algorithms to use time-varying step sizes and clipping parameters when the time horizon is unknown, which appears impossible in prior works. We show that in the case of clipped stochastic mirror descent, problem constants, including the initial distance to the optimum, are not required when setting step sizes and clipping parameters.) <|cite_end|>prove high-probability convergence for Clipped Stochastic Mirror Descent (\algname{Clipped-SMD}) for \emph{convex} composite problems. Moreover, the authors also propose Accelerated \algname{Clipped-SMD} (\algname{Clipped-ASMD}) and show that the algorithm is indeed accelerated \emph{but only under the additional assumption that $\nabla f(x^*) = 0$}. <|paper_end|> | [
"<|reference_start|> Clipped Stochastic Methods for Variational Inequalities with Heavy-Tailed Noise: Stochastic first-order methods such as Stochastic Extragradient (SEG) or Stochastic Gradient Descent-Ascent (SGDA) for solving smooth minimax problems and, more generally, variational inequality problems (VIP) have been gaining a lot of attention in recent years due to the growing popularity of adversarial formulations in machine learning. However, while high-probability convergence bounds are known to reflect the actual behavior of stochastic methods more accurately, most convergence results are provided in expectation. Moreover, the only known high-probability complexity results have been derived under restrictive sub-Gaussian (light-tailed) noise and bounded domain assumption [Juditsky et al., 2011]. In this work, we prove the first high-probability complexity results with logarithmic dependence on the confidence level for stochastic methods for solving monotone and structured non-monotone VIPs with non-sub-Gaussian (heavy-tailed) noise and unbounded domains. In the monotone case, our results match the best-known ones in the light-tails case [Juditsky et al., 2011], and are novel for structured non-monotone problems such as negative comonotone, quasi-strongly monotone, and/or star-cocoercive ones. We achieve these results by studying SEG and SGDA with clipping. In addition, we numerically validate that the gradient noise of many practical GAN formulations is heavy-tailed and show that clipping improves the performance of SEG/SGDA. <|reference_end|>",
"<|reference_start|> High probability convergence of clipped-sgd under heavy-tailed noise: While the convergence behaviors of stochastic gradient methods are well understood \\emph{in expectation}, there still exist many gaps in the understanding of their convergence with \\emph{high probability}, where the convergence rate has a logarithmic dependency on the desired success probability parameter. In the \\emph{heavy-tailed noise} setting, where the stochastic gradient noise only has bounded $p$-th moments for some $p\\in(1,2]$, existing works could only show bounds \\emph{in expectation} for a variant of stochastic gradient descent (SGD) with clipped gradients, or high probability bounds in special cases (such as $p=2$) or with extra assumptions (such as the stochastic gradients having bounded non-central moments). In this work, using a novel analysis framework, we present new and time-optimal (up to logarithmic factors) \\emph{high probability} convergence bounds for SGD with clipping under heavy-tailed noise for both convex and non-convex smooth objectives using only minimal assumptions. <|reference_end|>",
"<|reference_start|> From low probability to high confidence in stochastic convex optimization: Standard results in stochastic convex optimization bound the number of samples that an algorithm needs to generate a point with small function value in expectation. More nuanced high probability guarantees are rare, and typically either rely on \"light-tail\" noise assumptions or exhibit worse sample complexity. In this work, we show that a wide class of stochastic optimization algorithms for strongly convex problems can be augmented with high confidence bounds at an overhead cost that is only logarithmic in the confidence level and polylogarithmic in the condition number. The procedure we propose, called proxBoost, is elementary and builds on two well-known ingredients: robust distance estimation and the proximal point method. We discuss consequences for both streaming (online) algorithms and offline algorithms based on empirical risk minimization. <|reference_end|>",
"<|reference_start|> Improved convergence in high probability of clipped gradient methods with heavy tails: In this work, we study the convergence \\emph{in high probability} of clipped gradient methods when the noise distribution has heavy tails, ie., with bounded $p$th moments, for some $1<p\\le2$. Prior works in this setting follow the same recipe of using concentration inequalities and an inductive argument with union bound to bound the iterates across all iterations. This method results in an increase in the failure probability by a factor of $T$, where $T$ is the number of iterations. We instead propose a new analysis approach based on bounding the moment generating function of a well chosen supermartingale sequence. We improve the dependency on $T$ in the convergence guarantee for a wide range of algorithms with clipped gradients, including stochastic (accelerated) mirror descent for convex objectives and stochastic gradient descent for nonconvex objectives. This approach naturally allows the algorithms to use time-varying step sizes and clipping parameters when the time horizon is unknown, which appears impossible in prior works. We show that in the case of clipped stochastic mirror descent, problem constants, including the initial distance to the optimum, are not required when setting step sizes and clipping parameters. <|reference_end|>"
] | [
2,
4,
7,
8
] | {"<|cite_1|>": "arxiv-266978", "<|multi_cite_2_1|>": "ss-2434299", "<|multi_cite_2_2|>": "arxiv-216876", "<|multi_cite_2_3|>": "arxiv-266978", "<|multi_cite_2_4|>": "arxiv-424330", "<|multi_cite_2_5|>": "arxiv-351329", "<|multi_cite_2_6|>": "arxiv-478644", "<|multi_cite_2_7|>": "ss-812272", "<|multi_cite_2_8|>": "ss-812273", "<|multi_cite_2_9|>": "arxiv-485043", "<|cite_3|>": "ss-1282301", "<|multi_cite_4_1|>": "ss-920907", "<|multi_cite_4_2|>": "ss-828266", "<|multi_cite_4_3|>": "ss-1375482", "<|cite_5|>": "arxiv-38411", "<|multi_cite_6_1|>": "arxiv-266978", "<|multi_cite_6_2|>": "arxiv-478644", "<|cite_7|>": "ss-828266", "<|cite_8|>": "ss-957869", "<|cite_9|>": "ss-812274", "<|multi_cite_10_1|>": "arxiv-321636", "<|multi_cite_10_2|>": "arxiv-399228", "<|multi_cite_11_1|>": "arxiv-266978", "<|multi_cite_11_2|>": "arxiv-478644", "<|cite_12|>": "ss-1375482", "<|cite_13|>": "arxiv-478644", "<|cite_14|>": "arxiv-238137", "<|cite_15|>": "arxiv-478644", "<|cite_16|>": "ss-699297", "<|cite_17|>": "ss-2276371", "<|multi_cite_18_1|>": "arxiv-104593", "<|multi_cite_18_2|>": "arxiv-325876", "<|multi_cite_18_3|>": "arxiv-352162", "<|cite_19|>": "arxiv-352162", "<|cite_20|>": "arxiv-478644", "<|cite_21|>": "arxiv-478644", "<|cite_22|>": "arxiv-478644", "<|cite_23|>": "ss-1352689", "<|cite_24|>": "ss-1352689", "<|cite_25|>": "arxiv-478644", "<|cite_26|>": "arxiv-478644", "<|multi_cite_27_1|>": "ss-1304002", "<|multi_cite_27_2|>": "ss-752238", "<|multi_cite_27_3|>": "ss-1263691", "<|cite_28|>": "ss-2434299", "<|cite_29|>": "arxiv-266978", "<|cite_30|>": "ss-1837155", "<|cite_31|>": "arxiv-424330", "<|cite_32|>": "arxiv-478644", "<|cite_33|>": "ss-812272", "<|cite_34|>": "ss-2434299", "<|cite_35|>": "arxiv-216876", "<|cite_36|>": "arxiv-216876", "<|cite_37|>": "ss-1352689"} |
1501.02516 | <|paper_start|> Title: Beam-searching and Transmission Scheduling in Millimeter Wave Communications
Abstract: Beam-searching and Transmission Scheduling in Millimeter Wave Communications: Millimeter wave (mmW) wireless networks are capable to support multi-gigabit data rates, by using directional communications with narrow beams. However, existing mmW communications standards are hindered by two problems: deafness and single link scheduling. The deafness problem, that is, a misalignment between transmitter and receiver beams, demands a time consuming beam-searching operation, which leads to an alignment-throughput tradeoff. Moreover, the existing mmW standards schedule a single link in each time slot and hence do not fully exploit the potential of mmW communications, where directional communications allow multiple concurrent transmissions. These two problems are addressed in this paper, where a joint beamwidth selection and power allocation problem is formulated by an optimization problem for short range mmW networks with the objective of maximizing effective network throughput. This optimization problem allows establishing the fundamental alignment-throughput tradeoff, however it is computationally complex and requires exact knowledge of network topology, which may not be available in practice. Therefore, two standard-compliant approximation solution algorithms are developed, which rely on underestimation and overestimation of interference. The first one exploits directionality to maximize the reuse of available spectrum and thereby increases the network throughput, while imposing almost no computational complexity. The second one is a more conservative approach that protects all active links from harmful interference, yet enhances the network throughput by 100% compared to the existing standards. Extensive performance analysis provides useful insights on the directionality level and the number of concurrent transmissions that should be pursued. Interestingly, extremely narrow beams are in general not optimal.
Introduction
\label{sec: introductions}
Millimeter wave (mmW) communications appear as a promising option to meet the ever growing demand for multi-gigabit data rates, at least over short distances. MmW communications refer to the electromagnetic spectrum between 30 and 300~GHz, which corresponds to wavelengths from 10~mm to 1~mm. Small wavelength enables integration of numerous antenna elements in the current size of radio chips, which in turn promises a significant directivity gain. The main characteristics of mmW are directionality, large bandwidth, but also high attenuation <|cite_start|> (Reference: Millimeter Wave Cellular Wireless Networks: Potentials and Challenges: Millimeter wave (mmW) frequencies between 30 and 300 GHz are a new frontier for cellular communication that offers the promise of orders of magnitude greater bandwidths combined with further gains via beamforming and spatial multiplexing from multi-element antenna arrays. This paper surveys measurements and capacity studies to assess this technology with a focus on small cell deployments in urban environments. The conclusions are extremely encouraging; measurements in New York City at 28 and 73 GHz demonstrate that, even in an urban canyon environment, significant non-line-of-sight (NLOS) outdoor, street-level coverage is possible up to approximately 200 m from a potential low power micro- or picocell base station. In addition, based on statistical channel models from these measurements, it is shown that mmW systems can offer more than an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks at current cell densities. Cellular systems, however, will need to be significantly redesigned to fully achieve these gains. Specifically, the requirement of highly directional and adaptive transmissions, directional isolation between links and significant possibilities of outage have strong implications on multiple access, channel structure, synchronization and receiver design. To address these challenges, the paper discusses how various technologies including adaptive beamforming, multihop relaying, heterogeneous network architectures and carrier aggregation can be leveraged in the mmW context.) <|cite_end|>.
MmW has lately been considered by several standardization bodies as an ideal candidate for short range communications. Specifically, the IEEE~802.15.3 task group 3c works on the development of high rate wireless personal area networks (WPAN), whereas the IEEE~802.11ad task group focuses on wireless local area networks (WLAN). In existing standards, one of the network devices is assigned the role of the coordinator, which schedules transmissions in a centralized manner. In particular, channel access is determined through a hybrid carrier sense multiple access/collision avoidance (CSMA/CA) and time division multiple access (TDMA) scheme. A superframe consists of three phases: a beacon period; a contention access period, where devices compete to register their requests for channel access with the coordinator; and a channel time allocation period, which is further divided into several time slots, each assigned to a \emph{single} transmitter-receiver pair.
Existing standards do not exploit the full potential of mmW communications. In fact, high data rates are achieved due to the high signal-to-noise ratio (SNR), which is a result of directional communications, and extended bandwidth availability in mmW bands. Pencil beams, however, promises extensive frequency reuse while simplifies interference management <|cite_start|> (Reference: Directional MAC protocol for millimeter wave based wireless personal area networks: Recently, up to 7 GHz license-free spectrum around 60 GHz has been allocated worldwide for high data rate wireless communications. This enables the deployment of WPANs at 60 GHz for short-range multimedia applications up to gigabits per second. In this paper we propose a new scheme to increase the efficiency of MAC layer protocol for WPANs at 60 GHz when directional antennas are used. Our scheme is based on an adaptation of the current IEEE 15.3 standard MAC protocol for WPANs on two aspects. Firstly, we propose a rate-adaptation based scheme to coordinate the directional and omni-directional transmissions in WPANs. Secondly, we propose a novel channel time allocation algorithm, which enables spatial reuse TDMA. The analytical results reveal that our algorithm significantly increases the system capacity.) <|cite_end|> <|cite_start|> (Reference: Interference analysis for highly directional 60-GHz mesh networks: the case for rethinking medium access control: We investigate spatial interference statistics for multigigabit outdoor mesh networks operating in the unlicensed 60-GHz “millimeter (mm) wave” band. The links in such networks are highly directional: Because of the small carrier wavelength (an order of magnitude smaller than those for existing cellular and wireless local area networks), narrow beams are essential for overcoming higher path loss and can be implemented using compact electronically steerable antenna arrays. Directionality drastically reduces interference, but it also leads to “deafness,” making implicit coordination using carrier sense infeasible. In this paper, we make a quantitative case for rethinking medium access control (MAC) design in such settings. Unlike existing MAC protocols for omnidirectional networks, where the focus is on interference management, we contend that MAC design for 60-GHz mesh networks can essentially ignore interference and must focus instead on the challenge of scheduling half-duplex transmissions with deaf neighbors. Our main contribution is an analytical framework for estimating the collision probability in such networks as a function of the antenna patterns and the density of simultaneously transmitting nodes. The numerical results from our interference analysis show that highly directional links can indeed be modeled as pseudowired, in that the collision probability is small even with a significant density of transmitters. Furthermore, simulation of a rudimentary directional slotted Aloha protocol shows that packet losses due to failed coordination are an order of magnitude higher than those due to collisions, confirming our analytical results and highlighting the need for more sophisticated coordination mechanisms.) <|cite_end|>.
In this paper, we suggest that efficient transmission scheduling mechanisms could significantly improve the network throughput (spectral efficiency) by scheduling multiple transmissions in the same time slot, as long as they do not cause harmful interference to each other. The amount of interference caused also depends on the beamwidths that the devices operate with. This introduces an alignment-throughput tradeoff: a narrower beamwidth introduces a significant searching overhead, since many directions have to be searched, but provides a higher transmission rate due to higher directivity gains, whereas a larger beamwidth speeds up the search process at the expense of a lower transmission rate. To address these problems, we propose a joint formulation of the beamwidth selection and transmission scheduling problems in mmW communications, and analyze the impact of each system parameter on the network throughput.
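To make the alignment-throughput tradeoff concrete, the sketch below evaluates the effective rate of a single link as a function of the operating beamwidth under a simple sectored antenna model; the slot length, pilot duration, sector width, bandwidth, and reference SNR are illustrative assumptions and are not taken from the standards or from our system model.
\begin{verbatim}
import math

def effective_throughput(theta, slot_s=1e-3, pilot_s=2e-6, psi=math.pi/2,
                         bandwidth_hz=2.16e9, snr_omni=0.05):
    """Effective rate of one link when both ends use beamwidth `theta` (rad).

    Alignment cost: an exhaustive pairwise search over (psi/theta)^2 pilot
    transmissions within a sector of width `psi`.
    Rate: Shannon capacity with an idealized directivity gain of 2*pi/theta
    at each end, applied to an omnidirectional reference SNR `snr_omni`.
    """
    alignment_s = (psi / theta) ** 2 * pilot_s           # search overhead
    if alignment_s >= slot_s:
        return 0.0                                        # no time left for data
    gain = (2 * math.pi / theta) ** 2                     # tx and rx directivity
    rate = bandwidth_hz * math.log2(1 + snr_omni * gain)  # bits/s during payload
    return (1 - alignment_s / slot_s) * rate              # time-averaged rate

# Narrow beams pay more alignment overhead but enjoy higher rates:
for deg in (5, 10, 20, 40, 80):
    theta = math.radians(deg)
    print(f"{deg:>3} deg -> {effective_throughput(theta)/1e9:6.2f} Gbit/s")
\end{verbatim}
Even this toy model exhibits an interior optimum: very narrow beams spend most of the slot on alignment, while very wide beams sacrifice directivity gain.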
\subsection{Related Work}
Our work is focused on capturing the major tradeoffs in mmW communications, which are mainly due to beam-searching overhead and concurrent transmission scheduling. In the following, we present an overview of existing works in the field.
A main issue in mmW communications is deafness, which is a direct consequence of directional transmission and reception. It occurs when the main beams of a transmitter and the intended receiver are not aligned. To address this issue, beam-searching has been proposed to establish a communication link <|cite_start|> (Reference: On the Efficient Beam-Forming Training for 60GHz Wireless Personal Area Networks: In this article, we suggest an efficient beam switching technique for the emerging 60GHz wireless personal area networks. Given the pre-specified beam codebooks, the beam switching process, aiming to identify the best beam-pair for data transmissions, is formulated as a global optimization problem in a two-dimension plane that is formed by the potential beam pattern index. As the analytical gradient information of the objective reward function is practically unavailable, Rosenbrock numerical algorithm is properly adopted to implement beam searching, by implicitly approaching and exploiting the gradient descent direction through the numerical pattern-search mechanic. In order to enhance search performance, furthermore, a novel initialization process is presented to provide the feasible initial solution for Rosenbrock search. Inspired by the appealing conception of small-region dividing and conquering, this pre-search algorithm can efficiently reduce the search scope and hence improve the success probability. The developed beam switching technique, i.e. an initialization process followed by Rosenbrock search, exhibits a much lower complexity than the current state-of-the-art strategies. It is demonstrated from both theoretical analysis and numerical experiments that, compared with the existing popular methods, the required protocol overhead of the new beam-training procedure can be significantly reduced, accompanying the power consumption of 60GHz devices.) <|cite_end|> <|cite_start|> (Reference: Beam codebook based beamforming protocol for multi-gbps millimeter-wave WPAN systems: In order to realize high speed, long range, reliable transmission in millimeter-wave 60 GHz wireless personal area networks (60 GHz WPANs), we propose a beamforming (BF) protocol realized in media access control (MAC) layer on top of multiple physical layer (PHY) designs. The proposed BF protocol targets to minimize the BF set-up time and to mitigate the high path loss of 60 GHz WPAN systems. It consists of 3 stages, namely the device (DEV) to DEV linking, sector-level searching and beam-level searching. The division of the stages facilitates significant reduction in setup time as compared to BF protocols with exhaustive searching mechanisms. The proposed BF protocol employs discrete phase-shifters, which significantly simplifies the structure of DEVs as compared to the conventional BF with phase-and-amplitude adjustment, at the expense of a gain degradation of less than 1 dB. The proposed BF protocol is a complete design and PHY-independent, it is applicable to different antenna configurations. Simulation results show that the setup time of the proposed BF protocol is as small as 2% when compared to the exhaustive searching protocol. Furthermore, based on the codebooks with four phases per element, around 15.1 dB gain is achieved by using eight antenna elements at both transmitter and receiver, thereby enabling 1.6 Gbps-data-streaming over a range of three meters. 
Due to the flexibility in supporting multiple PHY layer designs, the proposed protocol has been adopted by the IEEE 802.15.3c as an optional functionality to realize Gbps communication systems.) <|cite_end|>. In this case, an exhaustive search over all possible combinations of transmission and reception directions is performed through a sequence of pilot transmissions.
In fact, mmW devices adopt analog beamforming, also called beam-searching, using simple phase shifters, rather than a complex digital beamforming based on instantaneous channel state information, since the latter would impose formidable complexity in mmW due to the large number of antennas <|cite_start|> (Reference: Millimeter Wave Cellular Wireless Networks: Potentials and Challenges: Millimeter wave (mmW) frequencies between 30 and 300 GHz are a new frontier for cellular communication that offers the promise of orders of magnitude greater bandwidths combined with further gains via beamforming and spatial multiplexing from multi-element antenna arrays. This paper surveys measurements and capacity studies to assess this technology with a focus on small cell deployments in urban environments. The conclusions are extremely encouraging; measurements in New York City at 28 and 73 GHz demonstrate that, even in an urban canyon environment, significant non-line-of-sight (NLOS) outdoor, street-level coverage is possible up to approximately 200 m from a potential low power micro- or picocell base station. In addition, based on statistical channel models from these measurements, it is shown that mmW systems can offer more than an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks at current cell densities. Cellular systems, however, will need to be significantly redesigned to fully achieve these gains. Specifically, the requirement of highly directional and adaptive transmissions, directional isolation between links and significant possibilities of outage have strong implications on multiple access, channel structure, synchronization and receiver design. To address these challenges, the paper discusses how various technologies including adaptive beamforming, multihop relaying, heterogeneous network architectures and carrier aggregation can be leveraged in the mmW context.) <|cite_end|>. Although the beam-searching concept facilitates the beamforming phase, it introduces an alignment overhead, that is, the time required for finding the best beams.
This overhead depends on the number of directions that have to be searched, which in turn depends on the selected transmission and reception beamwidths.
Current standardization activities suggest a two-stage beam-search technique to reduce the alignment overhead and power consumption. Initially, a coarse-grained sector-level sweep is performed, followed by a beam-level alignment phase. An exhaustive search over all possible transmission and reception directions is applied at each level.
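As a rough worked example of the savings brought by the two-stage procedure, the snippet below counts the pilot transmissions needed by a one-shot exhaustive pairwise sweep versus a sector-level sweep followed by a beam-level sweep; the 360-degree span and 90-degree sectors are assumptions made purely for illustration.
\begin{verbatim}
def exhaustive_pilots(theta_deg, span_deg=360):
    # One-shot exhaustive pairwise search at beam resolution theta.
    n = span_deg // theta_deg
    return n * n

def two_stage_pilots(theta_deg, sector_deg=90, span_deg=360):
    # Coarse sector-level sweep, then a beam-level sweep within the chosen
    # sector pair (both transmitter and receiver sweep in each stage).
    sectors = span_deg // sector_deg
    beams_per_sector = sector_deg // theta_deg
    return sectors * sectors + beams_per_sector * beams_per_sector

# e.g. 5-degree beams: 5184 pilots one-shot vs. 16 + 324 = 340 two-stage
print(exhaustive_pilots(5), two_stage_pilots(5))
\end{verbatim}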
For a given beamwidth (fixed granularity of searching), <|cite_start|> (Reference: On the Efficient Beam-Forming Training for 60GHz Wireless Personal Area Networks: In this article, we suggest an efficient beam switching technique for the emerging 60GHz wireless personal area networks. Given the pre-specified beam codebooks, the beam switching process, aiming to identify the best beam-pair for data transmissions, is formulated as a global optimization problem in a two-dimension plane that is formed by the potential beam pattern index. As the analytical gradient information of the objective reward function is practically unavailable, Rosenbrock numerical algorithm is properly adopted to implement beam searching, by implicitly approaching and exploiting the gradient descent direction through the numerical pattern-search mechanic. In order to enhance search performance, furthermore, a novel initialization process is presented to provide the feasible initial solution for Rosenbrock search. Inspired by the appealing conception of small-region dividing and conquering, this pre-search algorithm can efficiently reduce the search scope and hence improve the success probability. The developed beam switching technique, i.e. an initialization process followed by Rosenbrock search, exhibits a much lower complexity than the current state-of-the-art strategies. It is demonstrated from both theoretical analysis and numerical experiments that, compared with the existing popular methods, the required protocol overhead of the new beam-training procedure can be significantly reduced, accompanying the power consumption of 60GHz devices.) <|cite_end|> suggests a new search technique as a replacement of the two-stage exhaustive search to reduce the alignment overhead.
Here, we suggest that the alignment-throughput tradeoff should be addressed by optimizing the beamwidth itself; hence, our work and <|cite_start|> (Reference: On the Efficient Beam-Forming Training for 60GHz Wireless Personal Area Networks: In this article, we suggest an efficient beam switching technique for the emerging 60GHz wireless personal area networks. Given the pre-specified beam codebooks, the beam switching process, aiming to identify the best beam-pair for data transmissions, is formulated as a global optimization problem in a two-dimension plane that is formed by the potential beam pattern index. As the analytical gradient information of the objective reward function is practically unavailable, Rosenbrock numerical algorithm is properly adopted to implement beam searching, by implicitly approaching and exploiting the gradient descent direction through the numerical pattern-search mechanic. In order to enhance search performance, furthermore, a novel initialization process is presented to provide the feasible initial solution for Rosenbrock search. Inspired by the appealing conception of small-region dividing and conquering, this pre-search algorithm can efficiently reduce the search scope and hence improve the success probability. The developed beam switching technique, i.e. an initialization process followed by Rosenbrock search, exhibits a much lower complexity than the current state-of-the-art strategies. It is demonstrated from both theoretical analysis and numerical experiments that, compared with the existing popular methods, the required protocol overhead of the new beam-training procedure can be significantly reduced, accompanying the power consumption of 60GHz devices.) <|cite_end|> are complementary.
The option of activating concurrent transmissions to optimally exploit the directionality of mmW communications was proposed only recently. The authors of <|cite_start|> (Reference: STDMA-based Scheduling Algorithm for Concurrent Transmissions in Directional Millimeter Wave Networks: In this paper, a concurrent transmission scheduling algorithm is proposed to enhance the resource utilization efficiency for multi-Gbps millimeter-wave (mmWave) networks. Specifically, we exploit spatial-time division multiple access (STDMA) to improve the system throughput by allowing both non-interfering and interfering links to transmit concurrently, considering the high propagation loss at mmWave band and the utilization of directional antenna. Concurrent transmission scheduling in mmWave networks is formulated as an optimization model to maximize the number of flows scheduled in the network such that the quality of service (QoS) requirement of each flow is satisfied. We further decompose the optimization problem and propose a flip-based heuristic scheduling algorithm with low computational complexity to solve the problem. Extensive simulations demonstrate that the proposed algorithm can significantly improve the network performance in terms of network throughput and the number of supported flows.) <|cite_end|> consider the problem of maximizing the number of scheduled flows such that their quality of service requirements are not violated. A greedy scheduling scheme is proposed, where in each time slot an additional link is activated if its contribution to the total throughput is positive, that is, if the throughput gain from this additional link is larger than the interference it causes. A similar greedy heuristic is proposed in <|cite_start|> (Reference: FlashLinQ: A synchronous distributed scheduler for peer-to-peer ad hoc networks: This paper proposes FlashLinQ-a synchronous peer-to-peer wireless PHY/MAC network architecture. FlashLinQ leverages the fine-grained parallel channel access offered by OFDM and incorporates an analog energy-level-based signaling scheme that enables signal-to-interference ratio (SIR)-based distributed scheduling. This new signaling mechanism, and the concomitant scheduling algorithm, enables efficient channel-aware spatial resource allocation, leading to significant gains over a CSMA/CA system using RTS/CTS. FlashLinQ is a complete system architecture including: 1) timing and frequency synchronization derived from cellular spectrum; 2) peer discovery; 3) link management; and 4) channel-aware distributed power, data rate, and link scheduling. FlashLinQ has been implemented for operation over licensed spectrum on a digital signal processor/field-programmable gate array (DSP/FPGA) platform. In this paper, we present FlashLinQ performance results derived from both measurements and simulations.) <|cite_end|>, where a priority ordering of links is assumed. Additional links are activated according to this priority order, as long as the signal-to-interference-plus-noise ratio (SINR) at all receivers exceeds a threshold. The main issue with all these approaches is that they are reactive protocols, that is, a link has to be activated to deduce whether it is compatible with other transmissions. Instead, here we demonstrate that the directionality and high attenuation of mmW communications can be exploited to derive accurate scheduling mechanisms.
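To illustrate the greedy, priority-ordered style of scheduling discussed above, the sketch below activates links one by one and keeps a candidate only if every already-activated link, and the candidate itself, still meets an SINR threshold; the gain model, noise level, and threshold are placeholder assumptions, and the sketch does not reproduce the exact algorithms of the cited works.
\begin{verbatim}
def greedy_schedule(links, gain, noise=1e-9, sinr_min=4.0):
    """Pick a set of links for one slot, following a given priority order.

    links      : list of link ids, already sorted by priority.
    gain(i, j) : channel gain from the transmitter of link i to the receiver
                 of link j (includes antenna patterns and path loss).
    A candidate is kept only if every scheduled link, including itself,
    still meets the SINR threshold after it is added.
    """
    scheduled = []
    for cand in links:
        trial = scheduled + [cand]
        ok = True
        for l in trial:
            signal = gain(l, l)
            interference = sum(gain(o, l) for o in trial if o != l)
            if signal / (interference + noise) < sinr_min:
                ok = False
                break
        if ok:
            scheduled = trial
    return scheduled
\end{verbatim}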
\subsection{Our Contribution}
The main contributions of this paper are summarized as follows.
\begin{itemize}
\item We identify the tradeoffs and the corresponding controls that differentiate mmW communications from other technologies.
\item We provide a unifying optimization-based framework that brings together beam-searching and transmission scheduling and explicitly addresses the major challenges of mmW communications, namely deafness and interference management.
\item We demonstrate how the proposed framework can be translated into protocols that extend the capabilities of existing standards.
\item We evaluate the performance gains arising from the proposed protocols. Our performance analysis provides useful insights on the directionality level and the number of concurrent transmissions that should be supported.
\end{itemize} <|paper_end|> | [
"<|reference_start|> Directional MAC protocol for millimeter wave based wireless personal area networks: Recently, up to 7 GHz license-free spectrum around 60 GHz has been allocated worldwide for high data rate wireless communications. This enables the deployment of WPANs at 60 GHz for short-range multimedia applications up to gigabits per second. In this paper we propose a new scheme to increase the efficiency of MAC layer protocol for WPANs at 60 GHz when directional antennas are used. Our scheme is based on an adaptation of the current IEEE 15.3 standard MAC protocol for WPANs on two aspects. Firstly, we propose a rate-adaptation based scheme to coordinate the directional and omni-directional transmissions in WPANs. Secondly, we propose a novel channel time allocation algorithm, which enables spatial reuse TDMA. The analytical results reveal that our algorithm significantly increases the system capacity. <|reference_end|>",
"<|reference_start|> Millimeter Wave Cellular Wireless Networks: Potentials and Challenges: Millimeter wave (mmW) frequencies between 30 and 300 GHz are a new frontier for cellular communication that offers the promise of orders of magnitude greater bandwidths combined with further gains via beamforming and spatial multiplexing from multi-element antenna arrays. This paper surveys measurements and capacity studies to assess this technology with a focus on small cell deployments in urban environments. The conclusions are extremely encouraging; measurements in New York City at 28 and 73 GHz demonstrate that, even in an urban canyon environment, significant non-line-of-sight (NLOS) outdoor, street-level coverage is possible up to approximately 200 m from a potential low power micro- or picocell base station. In addition, based on statistical channel models from these measurements, it is shown that mmW systems can offer more than an order of magnitude increase in capacity over current state-of-the-art 4G cellular networks at current cell densities. Cellular systems, however, will need to be significantly redesigned to fully achieve these gains. Specifically, the requirement of highly directional and adaptive transmissions, directional isolation between links and significant possibilities of outage have strong implications on multiple access, channel structure, synchronization and receiver design. To address these challenges, the paper discusses how various technologies including adaptive beamforming, multihop relaying, heterogeneous network architectures and carrier aggregation can be leveraged in the mmW context. <|reference_end|>",
"<|reference_start|> On the Efficient Beam-Forming Training for 60GHz Wireless Personal Area Networks: In this article, we suggest an efficient beam switching technique for the emerging 60GHz wireless personal area networks. Given the pre-specified beam codebooks, the beam switching process, aiming to identify the best beam-pair for data transmissions, is formulated as a global optimization problem in a two-dimension plane that is formed by the potential beam pattern index. As the analytical gradient information of the objective reward function is practically unavailable, Rosenbrock numerical algorithm is properly adopted to implement beam searching, by implicitly approaching and exploiting the gradient descent direction through the numerical pattern-search mechanic. In order to enhance search performance, furthermore, a novel initialization process is presented to provide the feasible initial solution for Rosenbrock search. Inspired by the appealing conception of small-region dividing and conquering, this pre-search algorithm can efficiently reduce the search scope and hence improve the success probability. The developed beam switching technique, i.e. an initialization process followed by Rosenbrock search, exhibits a much lower complexity than the current state-of-the-art strategies. It is demonstrated from both theoretical analysis and numerical experiments that, compared with the existing popular methods, the required protocol overhead of the new beam-training procedure can be significantly reduced, accompanying the power consumption of 60GHz devices. <|reference_end|>",
"<|reference_start|> STDMA-based Scheduling Algorithm for Concurrent Transmissions in Directional Millimeter Wave Networks: In this paper, a concurrent transmission scheduling algorithm is proposed to enhance the resource utilization efficiency for multi-Gbps millimeter-wave (mmWave) networks. Specifically, we exploit spatial-time division multiple access (STDMA) to improve the system throughput by allowing both non-interfering and interfering links to transmit concurrently, considering the high propagation loss at mmWave band and the utilization of directional antenna. Concurrent transmission scheduling in mmWave networks is formulated as an optimization model to maximize the number of flows scheduled in the network such that the quality of service (QoS) requirement of each flow is satisfied. We further decompose the optimization problem and propose a flip-based heuristic scheduling algorithm with low computational complexity to solve the problem. Extensive simulations demonstrate that the proposed algorithm can significantly improve the network performance in terms of network throughput and the number of supported flows. <|reference_end|>"
] | [
1,
5,
6,
8
] | {"<|cite_1|>": "arxiv-55097", "<|multi_cite_4_1|>": "ss-1718342", "<|multi_cite_4_2|>": "ss-973356", "<|multi_cite_5_1|>": "ss-806518", "<|multi_cite_5_2|>": "ss-781275", "<|cite_6|>": "arxiv-55097", "<|cite_8|>": "ss-806518", "<|cite_9|>": "ss-806518", "<|cite_10|>": "ss-1012808", "<|cite_11|>": "ss-1373796"} |
2406.04857-0 | <|paper_start|> Title: A Near-Linear Time Approximation Algorithm for Beyond-Worst-Case Graph Clustering
Abstract: A Near-Linear Time Approximation Algorithm for Beyond-Worst-Case Graph Clustering: We consider the semi-random graph model of [Makarychev, Makarychev and Vijayaraghavan, STOC'12], where, given a random bipartite graph with $\alpha$ edges and an unknown bipartition $(A, B)$ of the vertex set, an adversary can add arbitrary edges inside each community and remove arbitrary edges from the cut $(A, B)$ (i.e., all adversarial changes are \textit{monotone} with respect to the bipartition). For this model, a polynomial time algorithm is known to approximate the Balanced Cut problem up to value $O(\alpha)$ [MMV'12] as long as the cut $(A, B)$ has size $\Omega(\alpha)$. However, it consists of slow subroutines requiring optimal solutions for logarithmically many semidefinite programs. We study the fine-grained complexity of the problem and present the first near-linear time algorithm that achieves performance similar to that of [MMV'12]. Our algorithm runs in time $O(|V(G)|^{1+o(1)} + |E(G)|^{1+o(1)})$ and finds a balanced cut of value $O(\alpha)$. Our approach appears easily extendible to related problems, such as Sparsest Cut, and also yields a near-linear time $O(1)$-approximation to Dasgupta's objective function for hierarchical clustering [Dasgupta, STOC'16] for the semi-random hierarchical stochastic block model inputs of [Cohen-Addad, Kanade, Mallmann-Trenn, Mathieu, JACM'19].
Introduction
\label{section:introduction}
Graph clustering and partitioning problems are central in combinatorial optimization. Their study has led to a large variety of
key results, giving rise to new fundamental ideas and impactful practical outcomes.
The sparsest cut and balanced cut problems are iconic examples: On the one hand, they have served as a testbed for designing new breakthrough
algorithmic techniques, from the seminal paper of Leighton and Rao <|cite_start|> (Reference: An approximate max-flow min-cut theorem for uniform multicommodity flow problems with applications to approximation algorithms: A multicommodity flow problem is considered where for each pair of vertices (u, v) it is required to send f half-units of commodity (u, v) from u to v and f half-units of commodity (v, u) from v to u without violating capacity constraints. The main result is an algorithm for performing the task provided that the capacity of each cut exceeds the demand across the cut by a Theta (log n) factor. The condition on cuts is required in the worst case, and is trivially within a Theta (log n) factor of optimal for any flow problem. The result can be used to construct the first polylog-times optimal approximation algorithms for a wide variety of problems, including minimum quotient separators, 1/3-2/3 separators, bifurcators, crossing number, and VLSI layout area. It can also be used to route packets efficiently in arbitrary distributed networks.<<ETX>>) <|cite_end|>up to the results of Arora, Rao, and Vazirani <|cite_start|> (Reference: Expander Flows, Geometric Embeddings and Graph Partitioning: We give a O(&sqrt;log n)-approximation algorithm for the sparsest cut, edge expansion, balanced separator, and graph conductance problems. This improves the O(log n)-approximation of Leighton and Rao (1988). We use a well-known semidefinite relaxation with triangle inequality constraints. Central to our analysis is a geometric theorem about projections of point sets in Rd, whose proof makes essential use of a phenomenon called measure concentration.
We also describe an interesting and natural “approximate certificate” for a graph's expansion, which involves embedding an n-node expander in it with appropriate dilation and congestion. We call this an expander flow.) <|cite_end|>and Sherman <|cite_start|> (Reference: Breaking the Multicommodity Flow Barrier for O(vlog n)-Approximations
to Sparsest Cut: This paper ties the line of work on algorithms that find an O(√log(n))-approximation to the sparsest cut together with the line of work on algorithms that run in sub-quadratic time by using only single-commodity flows. We present an algorithm that simultaneously achieves both goals, finding an O(√log(n)/epsilon)-approximation using O(n^epsilon log^O(1) n) max-flows. The core of the algorithm is a stronger, algorithmic version of Arora et al.'s structure theorem, where we show that matching-chaining argument at the heart of their proof can be viewed as an algorithm that finds good augmenting paths in certain geometric multicommodity flow networks. By using that specialized algorithm in place of a black-box solver, we are able to solve those instances much more efficiently. We also show the cut-matching game framework can not achieve an approximation any better than Omega(log(n)/log log(n)) without re-routing flow.) <|cite_end|>. On the other hand, they are models for graph partitioning problems in various data mining
and unsupervised machine learning applications and have thus inspired widely-used heuristics in more applied fields.
\paragraph*{Beyond worst-case instances}
A frustrating gap exists between the impressive theoretical results obtained over the last three decades and the
success of heuristics used in practice. While poly-logarithmic approximation algorithms have been developed for balanced cut
and sparsest cut (and related problem such as minimum bisection <|cite_start|> (Reference: Optimal Hierarchical Decompositions for Congestion Minimization in Networks: Hierarchical graph decompositions play an important role in the design of approximation and online algorithms for graph problems. This is mainly due to the fact that the results concerning the approximation of metric spaces by tree metrics (e.g. [10,11,14,16]) depend on hierarchical graph decompositions. In this line of work a probability distribution over tree graphs is constructed from a given input graph, in such a way that the tree distances closely resemble the distances in the original graph. This allows it, to solve many problems with a distance-based cost function on trees, and then transfer the tree solution to general undirected graphs with only a logarithmic loss in the performance guarantee. The results about oblivious routing [30,22] in general undirected graphs are based on hierarchical decompositions of a different type in the sense that they are aiming to approximate the bottlenecks in the network (instead of the point-to-point distances). We call such decompositions cut-based decompositions. It has been shown that they also can be used to design approximation and online algorithms for a wide variety of different problems, but at the current state of the art the performance guarantee goes down by an O(log2n log log n)-factor when making the transition from tree networks to general graphs. In this paper we show how to construct cut-based decompositions that only result in a logarithmic loss in performance, which is asymptotically optimal. Remarkably, one major ingredient of our proof is a distance-based decomposition scheme due to Fakcharoenphol, Rao and Talwar [16]. This shows an interesting relationship between these seemingly different decomposition techniques. The main applications of the new decomposition are an optimal O(log n)-competitive algorithm for oblivious routing in general undirected graphs, and an O(log n)-approximation for Minimum Bisection, which improves the O(log1.5n) approximation by Feige and Krauthgamer [17].) <|cite_end|>, multicut <|cite_start|> (Reference: Approximate max-flow min-(multi) cut theorems and their applications: Consider the multicommodity flow problem in which the object is to maximize the sum of commodities routed. We prove the following approximate max-flow min-multicut theorem: $$ \dst \frac{\mbox{\rm min multicut}}{O(\log k)} \leq \mbox{ \rm max flow } \leq \mbox{ \rm min multicut}, $$ \noindent where $k$ is the number of commodities. Our proof is constructive; it enables us to find a multicut within $O(\log k)$ of the max flow (and hence also the optimal multicut). In addition, the proof technique provides a unified framework in which one can also analyse the case of flows with specified demands of Leighton and Rao and Klein et al. and thereby obtain an improved bound for the latter problem.) <|cite_end|>, min uncut <|cite_start|> (Reference: Improved approximation algorithms for Maximum Cut and Satisfiability problems using Semidefinite Programming: We present randomized approximation algorithms for the maximum cut (MAX CUT) and maximum 2-satisfiability (MAX 2SAT) problems that always deliver solutions of expected value at least.87856 times the optimal value. These algorithms use a simple and elegant technique that randomly rounds the solution to a nonlinear programming relaxation. 
This relaxation can be interpreted both as a semidefinite program and as an eigenvalue minimization problem. The best previously known approximation algorithms for these problems had performance guarantees of 1/2 for MAX CUT and 3/4 or MAX 2SAT. Slight extensions of our analysis lead to a.79607-approximation algorithm for the maximum directed cut problem (MAX DICUT) and a.758-approximation algorithm for MAX SAT, where the best previously known approximation algorithms had performance guarantees of 1/4 and 3/4, respectively. Our algorithm gives the first substantial progress in approximating MAX CUT in nearly twenty years, and represents the first use of semidefinite programming in the design of approximation algorithms.) <|cite_end|> <|cite_start|> (Reference: Matching on the Line Admits no \(o(\sqrt {\log n})\) -Competitive Algorithm: We present a simple proof that no randomized online matching algorithm for the line can be \((\sqrt {\log _2(n+1)}/15)\) -competitive against an oblivious adversary for any n = 2i - 1 : i ∈ ℕ. This is the first super-constant lower bound for the problem, and disproves as a corollary a recent conjecture on the topology-parametrized competitiveness achievable on generic spaces.) <|cite_end|>), the algorithm design
community has had little success in obtaining constant factor approximation algorithms for these problems. In fact, the Unique
Games Conjecture even suggests that such bounds may be very hard to obtain <|cite_start|> (Reference: Optimal inapproximability results for max-cut and other 2-variable CSPs?: In this paper, we give evidence suggesting that MAX-CUT is NP-hard to approximate to within a factor of /spl alpha//sub cw/+ /spl epsi/, for all /spl epsi/ > 0, where /spl alpha//sub cw/ denotes the approximation ratio achieved by the Goemans-Williamson algorithm (1995). /spl alpha//sub cw/ /spl ap/ .878567. This result is conditional, relying on two conjectures: a) the unique games conjecture of Khot; and, b) a very believable conjecture we call the majority is stablest conjecture. These results indicate that the geometric nature of the Goemans-Williamson algorithm might be intrinsic to the MAX-CUT problem. The same two conjectures also imply that it is NP-hard to (/spl beta/ + /spl epsi/)-approximate MAX-2SAT, where /spl beta/ /spl ap/ .943943 is the minimum of (2 + (2//spl pi/) /spl theta/)/(3 - cos(/spl theta/)) on (/spl pi//2, /spl pi/). Motivated by our proof techniques, we show that if the MAX-2CSP and MAX-2SAT problems are slightly restricted - in a way that seems to retain all their hardness -then they have (/spl alpha//sub GW/-/spl epsi/)- and (/spl beta/ - /spl epsi/)-approximation algorithms, respectively. Though we are unable to prove the majority is stablest conjecture, we give some partial results and indicate possible directions of attack. Our partial results are enough to imply that MAX-CUT is hard to (3/4 + 1/(2/spl pi/) + /spl epsi/)-approximate (/spl ap/ .909155), assuming only the unique games conjecture. We also discuss MAX-2CSP problems over non-Boolean domains and state some related results and conjectures. We show, for example, that the unique games conjecture implies that it is hard to approximate MAX-2LIN(q) to within any constant factor.) <|cite_end|> <|cite_start|> (Reference: The unique games conjecture, integrality gap for cut problems and embeddability of negative type metrics into l/sub 1/: In this paper, we disprove the following conjecture due to Goemans (1997) and Linial (2002): "Every negative type metric embeds into with constant distortion." We show that for every /spl delta/ > 0, and for large enough n, there is an n-point negative type metric which requires distortion at-least (log log n) /sup 1/6-/spl delta// to embed into l/sub 1/. Surprisingly, our construction is inspired by the Unique Games Conjecture (UGC) of Khot (2002), establishing a previously unsuspected connection between PCPs and the theory of metric embeddings. We first prove that the UGC implies super-constant hardness results for (non-uniform) sparsest cut and minimum uncut problems. It is already known that the UGC also implies an optimal hardness result for maximum cut (2004). Though these hardness results depend on the UGC, the integrality gap instances rely "only" on the PCP reductions for the respective problems. Towards this, we first construct an integrality gap instance for a natural SDP relaxation of unique games. Then, we "simulate" the PCP reduction and "translate"the integrality gap instance of unique games to integrality gap instances for the respective cut problems! This enables us to prove a (log log n) /sup 1/6-/spl delta// integrality gap for (nonuniform) sparsest cut and minimum uncut, and an optimal integrality gap for maximum cut. All our SDP solutions satisfy the so-called "triangle inequality" constraints. 
This also shows, for the first time, that the triangle inequality constraints do not add any power to the Goemans-Williamson's SDP relaxation of maximum cut. The integrality gap for sparsest cut immediately implies a lower bound for embedding negative type metrics into l/sub i/. It also disproves the non-uniform version of Arora, Rao and Vazirani's Conjecture (2004), asserting that the integrality gap of the sparsest cut SDP, with the triangle inequality constraints, is bounded from above by a constant.) <|cite_end|> <|cite_start|> (Reference: {Optimal algorithms and inapproximability results for every CSP?: Semidefinite Programming(SDP) is one of the strongest algorithmic techniques used in the design of approximation algorithms. In recent years, Unique Games Conjecture(UGC) has proved to be intimately connected to the limitations of Semidefinite Programming. Making this connection precise, we show the following result : If UGC is true, then for every constraint satisfaction problem(CSP) the best approximation ratio is given by a certain simple SDP. Specifically, we show a generic conversion from SDP integrality gaps to UGC hardness results for every CSP. This result holds both for maximization and minimization problems over arbitrary finite domains. Using this connection between integrality gaps and hardness results we obtain a generic polynomial-time algorithm for all CSPs. Assuming the Unique Games Conjecture, this algorithm achieves the optimal approximation ratio for every CSP. Unconditionally, for all 2-CSPs the algorithm achieves an approximation ratio equal to the integrality gap of a natural SDP used in literature. Further the algorithm achieves at least as good an approximation ratio as the best known algorithms for several problems like MaxCut, Max2Sat, MaxDiCut and Unique Games.) <|cite_end|> <|cite_start|> (Reference: Reductions Between Expansion Problems: The Small-Set Expansion Hypothesis (Raghavendra, Steurer, STOC 2010) is a natural hardness assumption concerning the problem of approximating the edge expansion of small sets in graphs. This hardness assumption is closely connected to the Unique Games Conjecture (Khot, STOC 2002). In particular, the Small-Set Expansion Hypothesis implies the Unique Games Conjecture (Raghavendra, Steurer, STOC 2010). Our main result is that the Small-Set Expansion Hypothesis is in fact equivalent to a variant of the Unique Games Conjecture. More precisely, the hypothesis is equivalent to the Unique Games Conjecture restricted to instance with a fairly mild condition on the expansion of small sets. Alongside, we obtain the first strong hardness of approximation results for the Balanced Separator and Minimum Linear Arrangement problems. Before, no such hardness was known for these problems even assuming the Unique Games Conjecture. These results not only establish the Small-Set Expansion Hypothesis as a natural unifying hypothesis that implies the Unique Games Conjecture, all its consequences and, in addition, hardness results for other problems like Balanced Separator and Minimum Linear Arrangement, but our results also show that the Small-Set Expansion Hypothesis problem lies at the combinatorial heart of the Unique Games Conjecture. The key technical ingredient is a new way of exploiting the structure of the Unique Games instances obtained from the Small-Set Expansion Hypothesis via (Raghavendra, Steurer, 2010). This additional structure allows us to modify standard reductions in a way that essentially destroys their local-gadget nature. 
Using this modification, we can argue about the expansion in the graphs produced by the reduction without relying on expansion properties of the underlying Unique Games instance (which would be impossible for a local-gadget reduction).) <|cite_end|>. Thus, to be able to show good approximation bounds
and design algorithms that are tailored to real-world instances, one must shift the focus from the worst-case to the
so-called \emph{beyond-worst-case} complexity of the problems.
This conclusion has seeded a long line of work aimed at modeling
average instances encountered in practice and designing algorithms for these models <|cite_start|> (Reference: Fast solution of some random NP-hard problems: ) <|cite_end|> <|cite_start|> (Reference: Graph bisection algorithms with good average case behavior: ) <|cite_end|> <|cite_start|> (Reference: Eigenvalues and graph bisection: An average-case analysis: Graph Bisection is the problem of partitioning the vertices of a graph into two equal-size pieces so as to minimize the number of edges between the two pieces. This paper presents an algorithm that will, for almost all graphs in a certain class, output the minimum-size bisection. Furthermore the algorithm will yield, for almost all such graphs, a proof that the bisection is optimal. The algorithm is based on computing eigenvalues and eigenvectors of matrices associated with the graph.) <|cite_end|> <|cite_start|> (Reference: Heuristics for Semirandom Graph Problems: We consider semirandom graph models for finding large independent sets, colorings, and bisections in graphs. These models generate problem instances by blending random and adversarial decisions. To generate semirandom independent set problems, an independent set S of ?n vertices is randomly chosen. Each edge connecting S with S is chosen with probability p, and an adversary is then allowed to add new edges arbitrarily, provided that S remains an independent set. The smaller p is, the greater the control the adversary has over the semirandom graph. We give a heuristic that with high probability recovers an independent set of size ?n whenever p> (1+?)lnn/?n, for any constant ?>0. We show that when p<(1??)lnn /?n, an independent set of size |S| cannot be recovered, unless NP?BPP. We use our result for maximum independent sets to obtain greatly improved heuristics for the model of k-colorable semirandom graphs introduced by Blum and Spencer. For constant k, our results are optimal up to constant factors in the edge probabilities. In the semirandom model for graph bisection, a random bisection (S, S) of the vertices is chosen. Each edge (u, v)?S×S is independently chosen with probability q and each edge (u, v)?S×S is independently chosen with probability pq. The adversary may then arbitrarily remove edges in S×S and add edges not in S×S. Extending the work of Boppana, we give a heuristic that recovers this bisection with high probability when p?q?cplogn/n, for c a sufficiently large constant.) <|cite_end|> <|cite_start|> (Reference: Spectral partitioning of random graphs: Problems such as bisection, graph coloring, and clique are generally believed hard in the worst case. However, they can be solved if the input data is drawn randomly from a distribution over graphs containing acceptable solutions. In this paper we show that a simple spectral algorithm can solve all three problems above in the average case, as well as a more general problem of partitioning graphs based on edge density. In nearly all cases our approach meets or exceeds previous parameters, while introducing substantial generality. We apply spectral techniques, using foremost the observation that in all of these problems, the expected adjacency matrix is a low rank matrix wherein the structure of the solution is evident.) <|cite_end|>(or analyzing existing algorithms in
these models <|cite_start|> (Reference: Simulated annealing for graph bisection: We resolve in the affirmative a question of R.B. Boppana and T. Bui: whether simulated annealing can with high probability and in polynomial time, find the optimal bisection of a random graph an G/sub npr/ when p-r=(/spl Theta/n/sup /spl Delta/-2/) for /spl Delta//spl les/2. (The random graph model G/sub npr/ specifies a "planted" bisection of density r, separating two n/2-vertex subsets of slightly higher density p.) We show that simulated "annealing" at an appropriate fixed temperature (i.e., the Metropolis algorithm) finds the unique smallest bisection in O(n/sup 2+/spl epsi//) steps with very high probability, provided /spl Delta/>11/6. (By using a slightly modified neighborhood structure, the number of steps can be reduced to O(n/sup 1+/spl epsi//).) We leave open the question of whether annealing is effective for /spl Delta/ in the range 3/2</spl les/11/6, whose lower limit represents the threshold at which the planted bisection becomes lost amongst other random small bisections. It also remains open whether hillclimbing (i.e. annealing at temperature 0) solves the same problem.<<ETX>>) <|cite_end|> <|cite_start|> (Reference: Go with the Winners for Graph Bisection.: We analyze “Go with the winners” for graph bisection. We introduce a weaker version of expansion called ‘<local expansion”. We show that “Go with the winners” works well in any search space whose sub-graphs with solutions at least as good as a certain threshold have local expansion, and where these sub-graphs do not shrink more than by a polynomial factor when the threshold is incremented. We give a general technique for showing that solution spaces for random instances of problems have local expansion. We apply this technique to the minimum bisection problem for random graphs. We conclude that “Go with the winners” approximates the best solution in random graphs of certain densities with planted bisections in polynomial time and finds the optimal solution in quasi-polynomial time. Although other methods also solve this problem for the same densities, the set of tools we develop may be useful in the analysis of similar problems. In particular, our results easily extend to hypergraph bisection, whereas it is not clear whether the other known techniques do.) <|cite_end|> <|cite_start|> (Reference: Are stable instances easy?: We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP--hard problems are easier to solve. In particular, whether there exist algorithms that solve correctly and in polynomial time all sufficiently stable instances of some NP--hard problem. The paper focuses on the Max--Cut problem, for which we show that this is indeed the case.) <|cite_end|>).
For the model to be relevant it should forbid pathological instances that are
extremely unlikely in practice while capturing the essence of the real-world instances without oversimplifying them.
While there has been a significant amount of work on inference in random and semi-random graph models,
the work of Makarychev, Makarychev and Vijayaraghavan <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>is among the first to analyze the
approximability and complexity of the graph partitioning objectives mentioned above for extremely general families of semi-random graphs.
In their setting, the input is generated from a distribution over graphs that exhibit a cluster structure. Concretely, the graph consists of two communities with a planted random cut between them; the adversary can modify the graph in agreement with the cluster structure
by arbitrarily adding edges within the communities and/or sparsifying the random cut across communities;\footnote{These are oftentimes referred to as \textit{monotone} perturbations. Such perturbations may have surprising effects on the statistical and computational aspects of the problem. For instance see <|cite_start|> (Reference: How Robust are Reconstruction Thresholds for Community Detection?: The stochastic block model is one of the oldest and most ubiquitous models for studying clustering and community detection. In an exciting sequence of developments, motivated by deep but non-rigorous ideas from statistical physics, Decelle et al. conjectured a sharp threshold for when community detection is possible in the sparse regime. Mossel, Neeman and Sly and Massoulie proved the conjecture and gave matching algorithms and lower bounds. Here we revisit the stochastic block model from the perspective of semirandom models where we allow an adversary to make `helpful' changes that strengthen ties within each community and break ties between them. We show a surprising result that these `helpful' changes can shift the information-theoretic threshold, making the community detection problem strictly harder. We complement this by showing that an algorithm based on semidefinite programming (which was known to get close to the threshold) continues to work in the semirandom model (even for partial recovery). This suggests that algorithms based on semidefinite programming are robust in ways that any algorithm meeting the information-theoretic threshold cannot be. These results point to an interesting new direction: Can we find robust, semirandom analogues to some of the classical, average-case thresholds in statistics? We also explore this question in the broadcast tree model, and we show that the viewpoint of semirandom models can help explain why some algorithms are preferred to others in practice, in spite of the gaps in their statistical performance on random models.) <|cite_end|> <|cite_start|> (Reference: Minimax Rates for Robust Community Detection: In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions. Our main result is an efficient algorithm that can tolerate an $\epsilon$-fraction of corruptions and achieves error $O(\epsilon) + e^{-\frac{C}{2} (1 \pm o(1))}$ where $C = (\sqrt{a} - \sqrt{b})^2$ is the signal-to-noise ratio and $a/n$ and $b/n$ are the inter-community and intra-community connection probabilities respectively. These bounds essentially match the minimax rates for the SBM without corruptions. We also give robust algorithms for $\mathbb{Z}_2$-synchronization. At the heart of our algorithm is a new semidefinite program that uses global information to robustly boost the accuracy of a rough clustering. Moreover, we show that our algorithms are doubly-robust in the sense that they work in an even more challenging noise model that mixes adversarial corruptions with unbounded monotone changes, from the semi-random model.) <|cite_end|>.} see \cref{model:main} for a precise definition.
In this context, the goal is not to recover the underlying cluster structure -- which may be information-theoretically impossible --
but rather to provide a good approximation to the cut objectives.
The motivation for studying such models is the following. In practice, the graphs we aim at clustering have an
unknown underlying cluster structure that we would like to identify -- and that's why we are running a clustering algorithm in
the first place. In this context, on the one hand the intra-cluster topology may be very peculiar and so possibly adversarial (hence we
would like to let the adversary freely choose
the intra-cluster topology\footnote{We remark this model is significantly more general than the stochastic block model, see \cref{section:related-research}}), on the other hand the inter-cluster topology is often more random, sometimes interpreted as noise between clusters and hence modeled as a random cut,
see also the discussion and motivating examples provided in <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>.
Of course, allowing the adversary to make the
planted cut denser -- and by doing so to smooth out the underlying cluster structure --
would bring us back to the worst-case setting; the semi-random model proposed above is thus a step in between.
Hence, with the idea of bridging the gap between worst-case complexity of the problems and real-world instances,
Makarychev, Makarychev, Vijayaraghavan <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>developed a general algorithmic framework for
graph partitioning problems on the above semi-random instances, which achieves an $O(1)$-approximation (for a wide array of parameters) for balanced cut and sparsest cut, as well as related problems
such as multicut, min uncut and small set expansion.
While the result of <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>is close to optimal in the sense that it achieves an $O(1)$-approximation
for several classic graph partitioning problems and a wide range of parameters, it relies on heavy machinery that requires iteratively solving multiple semi-definite programs.
In fact, the running time is not stated in the paper, and the algorithm appears to require $\Omega(n^3)$ time for the rounding, on top of the time it takes to obtain optimal solutions to
polylogarithmically many semi-definite programs with more than $\Omega(n^3)$ constraints.\footnote{We point out that the algorithm \textit{requires} an \textit{actual} feasible solution with nearly optimal objective value and not a rounded solution.}
We initiate the study of the \emph{fine-grained} complexity of the problem and ask: \textit{How fast can we solve beyond-worst-case
instances (involving semi-random perturbations)?}
\subsection{Results}
Before stating our main theorem, we introduce the model of interest.
\begin{model}[Random cut with monotone perturbations]\label{model:main}
We consider graphs over $n$ vertices generated through the following process. Let $a \in (0,1/2)$, $\eta(n)\in(0,1)$:
\begin{enumerate}
\item[(i)] The adversary partitions $[n]$ into sets $A, B$ satisfying $\card{A}, \card{B}\geq an$.
\item[(ii)] Each edge between $A$ and $B$ is drawn randomly and independently with probability $\eta$.
\item[(iii)] The adversary arbitrarily adds edges within $A$ and within $B$.
\item[(iv)] The adversary arbitrarily removes edges between $A$ and $B$.
\end{enumerate}
\end{model}
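For concreteness, the following sketch generates an instance according to \cref{model:main}; the specific balanced split in step (i) and the two adversarial hooks are placeholders supplied by the user, since the model only constrains the adversary to act monotonically with respect to the planted bipartition.
\begin{verbatim}
import random

def semi_random_instance(n, a, eta, add_inside=None, remove_across=None, seed=0):
    """Sample a graph following the random-cut-with-monotone-perturbations model.

    (i)   a bipartition (A, B) with |A|, |B| >= a*n (here a fixed split, for
          illustration; the model lets the adversary pick any such partition),
    (ii)  every pair in A x B becomes an edge independently with prob. eta,
    (iii) an adversarial hook may add edges inside A or inside B,
    (iv)  an adversarial hook may remove edges of the (A, B) cut.
    """
    rng = random.Random(seed)
    split = int(a * n)
    A, B = set(range(split)), set(range(split, n))
    cut = {(u, v) for u in A for v in B if rng.random() < eta}
    inside = set()
    if add_inside is not None:
        inside = {(u, v) for (u, v) in add_inside(A, B)
                  if {u, v} <= A or {u, v} <= B}       # only intra-community edges
    if remove_across is not None:
        cut -= set(remove_across(cut))                 # only cut edges are removed
    return A, B, cut | inside
\end{verbatim}
Any choice of the two hooks yields a valid instance, since additions are confined to the communities and removals to the cut, exactly the monotone perturbations allowed by the model.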
Our main result is an algorithm that, given an instance of \cref{model:main} with an $\Omega(n^2\cdot \eta)$-sized $(A,B)$ cut, returns an $O(1)$-approximation in almost linear time.\footnote{We write $o(1)$ to denote real-valued functions tending to zero as $n$ grows.}
\begin{theorem}\label{theorem:main}
Let $G$ be a graph over $n$ vertices generated through \cref{model:main} with parameters $a>0,\eta\geq \Omega(\frac{(\log n)^2 \cdot (\log\log n)^2}{n})\,.$
There exists an algorithm that on input $G$, with probability $1-o(1)$, outputs an $\Omega(a)$-balanced cut of value at most $O(n^2\cdot \eta)$, namely a cut where each side has size at least $\Omega(a \cdot n)$.
Moreover, the algorithm runs in time $O\Paren{\Card{V(G)}^{1+o(1)}+\Card{E(G)}^{1+o(1)}}$.
\end{theorem}
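To make the quantitative content of \cref{theorem:main} concrete, consider the following purely illustrative choice of parameters (the numbers are ours and are not tied to any particular application). Take $a = 1/4$ and $\eta = n^{-1/2}$, which satisfies $\eta \geq \Omega\bigl((\log n)^2 (\log\log n)^2 / n\bigr)$ for all sufficiently large $n$. The planted cut produced in step (ii) of \cref{model:main} has expected size at most $\eta\cdot\card{A}\cdot\card{B} \leq \eta n^2 = n^{3/2}$, and the theorem guarantees a cut in which both sides contain $\Omega(n)$ vertices and whose value is $O(n^{3/2})$. Note also that the running time is near-linear in the \emph{input size} $\Card{V(G)}+\Card{E(G)}$ rather than in $n$: since the adversary may add arbitrarily many edges inside $A$ and $B$, $\Card{E(G)}$ can be as large as $\Theta(n^2)$.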
\cref{theorem:main} is a significant step toward bridging the gap between the theoretically-oriented work of <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>and the practical motivation behind semi-random models.
The error guarantees of the underlying algorithm match those of <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>, but the running time is nearly linear.
Despite the fact that further steps remain to be taken to provide algorithmic solutions that both match the theoretical guarantees of <|cite_start|> (Reference: Approximation Algorithms for Semi-random Partitioning Problems: In this paper, we propose and study a new semi-random model for graph partitioning problems. We believe that it captures many properties of real-world instances. The model is more flexible than the semi-random model of Feige and Kilian and planted random model of Bui, Chaudhuri, Leighton and Sipser.
We develop a general framework for solving semi-random instances and apply it to several problems of interest. We present constant factor bi-criteria approximation algorithms for semi-random instances of the Balanced Cut, Multicut, Min Uncut, Sparsest Cut and Small Set Expansion problems. We also show how to almost recover the optimal solution if the instance satisfies an additional expanding condition. Our algorithms work in a wider range of parameters than most algorithms for previously studied random and semi-random models.
Additionally, we study a new planted algebraic expander model and develop constant factor bi-criteria approximation algorithms for graph partitioning problems in this model.) <|cite_end|>and whose running time is competitive with state-of-the-art Bisection heuristics, our algorithm is a \textit{first} example that general beyond-worst-case graph clustering can be done in near linear time.
Finally, we believe that understanding the fine-grained complexity of balanced cut and related problems beyond-the-worst case is an important line of work and the techniques presented here could lead to further improvements for other related problems for which the beyond-worst-case analysis has been studied (e.g.: Bilu-Linial stability for multicut <|cite_start|> (Reference: Are stable instances easy?: We introduce the notion of a stable instance for a discrete optimization problem, and argue that in many practical situations only sufficiently stable instances are of interest. The question then arises whether stable instances of NP--hard problems are easier to solve. In particular, whether there exist algorithms that solve correctly and in polynomial time all sufficiently stable instances of some NP--hard problem. The paper focuses on the Max--Cut problem, for which we show that this is indeed the case.) <|cite_end|> <|cite_start|> (Reference: Algorithms for stable and perturbation-resilient problems: We study the notion of stability and perturbation resilience introduced by Bilu and Linial (2010) and Awasthi, Blum, and Sheffet (2012). A combinatorial optimization problem is α-stable or α-perturbation-resilient if the optimal solution does not change when we perturb all parameters of the problem by a factor of at most α. In this paper, we give improved algorithms for stable instances of various clustering and combinatorial optimization problems. We also prove several hardness results. We first give an exact algorithm for 2-perturbation resilient instances of clustering problems with natural center-based objectives. The class of clustering problems with natural center-based objectives includes such problems as k-means, k-median, and k-center. Our result improves upon the result of Balcan and Liang (2016), who gave an algorithm for clustering 1+√≈2.41 perturbation-resilient instances. Our result is tight in the sense that no polynomial-time algorithm can solve (2ε)-perturbation resilient instances of k-center unless NP = RP, as was shown by Balcan, Haghtalab, and White (2016). We then give an exact algorithm for (2-2/k)-stable instances of Minimum Multiway Cut with k terminals, improving the previous result of Makarychev, Makarychev, and Vijayaraghavan (2014), who gave an algorithm for 4-stable instances. We also give an algorithm for (2-2/k+ς)-weakly stable instances of Minimum Multiway Cut. Finally, we show that there are no robust polynomial-time algorithms for n1-ε-stable instances of Set Cover, Minimum Vertex Cover, and Min 2-Horn Deletion (unless P = NP).) <|cite_end|>).
\paragraph*{Generalizations}
Our approach appears to also be easily extendable to other graph problems. As a concrete example, we consider the \textit{semi-random hierarchical stochastic block model} (henceforth HSM) of <|cite_start|> (Reference: Hierarchical Clustering: Objective Functions and Algorithms: Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a `good' hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties. We take an axiomatic approach to defining `good' objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of "admissible" objective functions (that includes Dasgupta's one) that have the property that when the input admits a `natural' hierarchical clustering, it has an optimal value. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an $O(\sqrt{\log n})$-approx. (Charikar and Chatziafratis independently proved that it is a $O(\sqrt{\log n})$-approx.). This improves upon the LP-based $O(\log n)$-approx. of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor 2 approx., and provide a simple and better algorithm that gives a factor 3/2 approx.. Finally, we consider `beyond-worst-case' scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs and we provide a simple 1 + o(1)-approximation in this setting.) <|cite_end|>. In <|cite_start|> (Reference: Hierarchical Clustering: Objective Functions and Algorithms: Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a `good' hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties. We take an axiomatic approach to defining `good' objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of "admissible" objective functions (that includes Dasgupta's one) that have the property that when the input admits a `natural' hierarchical clustering, it has an optimal value. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an $O(\sqrt{\log n})$-approx. 
(Charikar and Chatziafratis independently proved that it is a $O(\sqrt{\log n})$-approx.). This improves upon the LP-based $O(\log n)$-approx. of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor 2 approx., and provide a simple and better algorithm that gives a factor 3/2 approx.. Finally, we consider `beyond-worst-case' scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs and we provide a simple 1 + o(1)-approximation in this setting.) <|cite_end|>, the authors studied the celebrated objective
function for hierarchical clustering introduced by Dasgupta <|cite_start|> (Reference: A cost function for similarity-based hierarchical clustering: The development of algorithms for hierarchical clustering has been hampered by a shortage of precise objective functions. To help address this situation, we introduce a simple cost function on hierarchies over a set of points, given pairwise similarities between those points. We show that this criterion behaves sensibly in canonical instances and that it admits a top-down construction procedure with a provably good approximation ratio.) <|cite_end|>and investigated how well it can be approximated
beyond-the-worst-case. Assuming the Small Set Expansion hypothesis <|cite_start|> (Reference: Graph Expansion and the Unique Games Conjecture: The edge expansion of a subset of vertices S ⊆ V in a graph G measures the fraction of edges that leave S. In a d-regular graph, the edge expansion/conductance Φ(S) of a subset S ⊆ V is defined as Φ(S) = (|E(S, V\S)|)/(d|S|). Approximating the conductance of small linear sized sets (size δ n) is a natural optimization question that is a variant of the well-studied Sparsest Cut problem. However, there are no known algorithms to even distinguish between almost complete edge expansion (Φ(S) = 1-ε), and close to 0 expansion. In this work, we investigate the connection between Graph Expansion and the Unique Games Conjecture. Specifically, we show the following: We show that a simple decision version of the problem of approximating small set expansion reduces to Unique Games. Thus if approximating edge expansion of small sets is hard, then Unique Games is hard. Alternatively, a refutation of the UGC will yield better algorithms to approximate edge expansion in graphs. This is the first non-trivial "reverse" reduction from a natural optimization problem to Unique Games. Under a slightly stronger UGC that assumes mild expansion of small sets, we show that it is UG-hard to approximate small set expansion. On instances with sufficiently good expansion of small sets, we show that Unique Games is easy by extending the techniques of [4].) <|cite_end|>, the problem cannot be approximated within any constant factor. The authors thus introduce a generative model for hierarchical clustering inputs called the \emph{hierarchical stochastic block model} that naturally generalizes the classic stochastic block model, and show that one can approximate Dasgupta's objective
up to a constant factor in that model and under semi-random perturbations (the precise definition of the model can be found in
\cref{section:semi-random-hsm}). In this paper, we significantly improve the complexity of the algorithm of <|cite_start|> (Reference: Hierarchical Clustering: Objective Functions and Algorithms: Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a `good' hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties. We take an axiomatic approach to defining `good' objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of "admissible" objective functions (that includes Dasgupta's one) that have the property that when the input admits a `natural' hierarchical clustering, it has an optimal value. Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an $O(\log^{3/2} n)$-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an $O(\sqrt{\log n})$-approx. (Charikar and Chatziafratis independently proved that it is a $O(\sqrt{\log n})$-approx.). This improves upon the LP-based $O(\log n)$-approx. of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor 2 approx., and provide a simple and better algorithm that gives a factor 3/2 approx.. Finally, we consider `beyond-worst-case' scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs and we provide a simple 1 + o(1)-approximation in this setting.) <|cite_end|>.
\begin{theorem}
\label{theo61}
Let $G$ be a graph generated from the HSM (Definition \ref{Definition51}) with $p_{min}= \Omega\left(\log n / n^{2/3} \right)$. Then, there exists a randomized algorithm that runs in time $O\Paren{\Card{V(G)}^{1+o(1)}+\Card{E(G)}^{1+o(1)}}$ and, with probability $1-o(1)$, outputs a tree $T$ such that
\begin{equation}
\label{eq:eq14}
cost(T;G) = O(OPT(\bar{G})),
\end{equation}
where $OPT(\bar{G})$ denotes the value of the optimal tree for $\bar{G}$ and we note that $OPT(\bar{G})=cost(\widetilde{T};\bar{G})$, where $\widetilde{T}$ is the generating tree. Furthermore, the above holds even in the semi-random case, i.e., when an adversary is allowed to remove any subset of the edges from $G$.
\end{theorem}
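For the reader's convenience, we recall Dasgupta's objective, to which the quantity $cost(T;G)$ in \cref{theo61} refers (our notation here may differ slightly from that used in \cref{section:semi-random-hsm} and Definition \ref{Definition51}): for a rooted tree $T$ whose leaves are the vertices of $G$,
\[
cost(T;G) \;=\; \sum_{\{i,j\}\in E(G)} w_{ij}\cdot \bigl|\mathrm{leaves}\bigl(T[i\vee j]\bigr)\bigr|,
\]
where $T[i\vee j]$ denotes the subtree of $T$ rooted at the least common ancestor of the leaves $i$ and $j$, and $w_{ij}$ is the similarity weight of the edge $\{i,j\}$ (equal to $1$ for unweighted graphs such as those drawn from the HSM).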
\subsection{Related Research}\label{section:related-research}
There has been extensive research on graph partitioning problems under random and semi-random models.
Perhaps the most extensively studied example is the stochastic block model (see <|cite_start|> (Reference: {Community Detection and Stochastic Block Models: Recent Developments: The stochastic block model (SBM) is a random graph model with planted clusters. It is widely employed as a canonical model to study clustering and community detection, and provides generally a fertile ground to study the statistical and computational tradeoffs that arise in network and data sciences.
This note surveys the recent developments that establish the fundamental limits for community detection in the SBM, both with respect to information-theoretic and computational thresholds, and for various recovery requirements such as exact, partial and weak recovery (a.k.a., detection). The main results discussed are the phase transitions for exact recovery at the Chernoff-Hellinger threshold, the phase transition for weak recovery at the Kesten-Stigum threshold, the optimal distortion-SNR tradeoff for partial recovery, the learning of the SBM parameters and the gap between information-theoretic and computational thresholds.
The note also covers some of the algorithms developed in the quest of achieving the limits, in particular two-round algorithms via graph-splitting, semi-definite programming, linearized belief propagation, classical and nonbacktracking spectral methods. A few open problems are also discussed.) <|cite_end|>for a broad overview). In its simplest form, the model describes graphs where both the inter-community and the intra-community topologies are random. That is, the graph is randomly partitioned into two subsets $(A, B)$ of the same size such that every edge between $A$ and $B$ exists with probability $\eta$, and every edge inside the communities $A$ and $B$ exists with probability $\mu\geq \eta$.\footnote{We remark that from both a computational and a statistical point of view, sharp phase transitions appear depending on the relation between the expected average degree and the community bias. We omit a detailed discussion and refer the interested reader to the aforementioned survey.}
Many algorithms are known to succesfully recover the partition for typical instances of the model <|cite_start|> (Reference: Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications: In this paper we extend our previous work on the stochastic block model, a commonly used generative model for social and biological networks, and the problem of inferring functional groups or communities from the topology of the network. We use the cavity method of statistical physics to obtain an asymptotically exact analysis of the phase diagram. We describe in detail properties of the detectability/undetectability phase transition and the easy/hard phase transition for the community detection problem. Our analysis translates naturally into a belief propagation algorithm for inferring the group memberships of the nodes in an optimal way, i.e., that maximizes the overlap with the underlying group memberships, and learning the underlying parameters of the block model. Finally, we apply the algorithm to two examples of real-world networks and discuss its performance.) <|cite_end|> <|cite_start|> (Reference: Community detection thresholds and the weak Ramanujan property: Decelle et al.\cite{Decelle11} conjectured the existence of a sharp threshold for community detection in sparse random graphs drawn from the stochastic block model. Mossel et al.\cite{Mossel12} established the negative part of the conjecture, proving impossibility of meaningful detection below the threshold. However the positive part of the conjecture remained elusive so far. Here we solve the positive part of the conjecture. We introduce a modified adjacency matrix $B$ that counts self-avoiding paths of a given length $\ell$ between pairs of nodes and prove that for logarithmic $\ell$, the leading eigenvectors of this modified matrix provide non-trivial detection, thereby settling the conjecture. A key step in the proof consists in establishing a {\em weak Ramanujan property} of matrix $B$. Namely, the spectrum of $B$ consists in two leading eigenvalues $\rho(B)$, $\lambda_2$ and $n-2$ eigenvalues of a lower order $O(n^{\epsilon}\sqrt{\rho(B)})$ for all $\epsilon>0$, $\rho(B)$ denoting $B$'s spectral radius. $d$-regular graphs are Ramanujan when their second eigenvalue verifies $|\lambda|\le 2 \sqrt{d-1}$. Random $d$-regular graphs have a second largest eigenvalue $\lambda$ of $2\sqrt{d-1}+o(1)$ (see Friedman\cite{friedman08}), thus being {\em almost} Ramanujan. Erd\H{o}s-R\'enyi graphs with average degree $d$ at least logarithmic ($d=\Omega(\log n)$) have a second eigenvalue of $O(\sqrt{d})$ (see Feige and Ofek\cite{Feige05}), a slightly weaker version of the Ramanujan property. However this spectrum separation property fails for sparse ($d=O(1)$) Erd\H{o}s-R\'enyi graphs. Our result thus shows that by constructing matrix $B$ through neighborhood expansion, we regularize the original adjacency matrix to eventually recover a weak form of the Ramanujan property.) <|cite_end|> <|cite_start|> (Reference: Reconstruction and estimation in the planted partition model: ) <|cite_end|> <|cite_start|> (Reference: Efficient bayesian estimation from few samples: Community detection and related problems: We propose an efficient meta-algorithm for Bayesian inference problems based on low-degree polynomials, semidefinite programming, and tensor decomposition. The algorithm is inspired by recent lower bound constructions for sum-of-squares and related to the method of moments. 
Our focus is on sample complexity bounds that are as tight as possible (up to additive lower-order terms) and often achieve statistical thresholds or conjectured computational thresholds.Our algorithm recovers the best known bounds for partial recovery in the stochastic block model, a widely-studied class of inference problems for community detection in graphs. We obtain the first partial recovery guarantees for the mixed-membership stochastic block model (Airoldi et el.) for constant average degree—up to what we conjecture to be the computational threshold for this model. %Our algorithm also captures smooth trade-offs between sample and computational complexity, for example, for tensor principal component analysis. We show that our algorithm exhibits a sharp computational threshold for the stochastic block model with multiple communities beyond the Kesten–Stigum bound—giving evidence that this task may require exponential time.The basic strategy of our algorithm is strikingly simple: we compute the best-possible low-degree approximation for the moments of the posterior distribution of the parameters and use a robust tensor decomposition algorithm to recover the parameters from these approximate posterior moments.) <|cite_end|> <|cite_start|> (Reference: A Proof Of The Block Model Threshold Conjecture: We study a random graph model named the "block model" in statistics and the "planted partition model" in theoretical computer science. In its simplest form, this is a random graph with two equal-sized clusters, with a between-class edge probability of $q$ and a within-class edge probability of $p$. A striking conjecture of Decelle, Krzkala, Moore and Zdeborov\'a based on deep, non-rigorous ideas from statistical physics, gave a precise prediction for the algorithmic threshold of clustering in the sparse planted partition model. In particular, if $p = a/n$ and $q = b/n$, $s=(a-b)/2$ and $p=(a+b)/2$ then Decelle et al.\ conjectured that it is possible to efficiently cluster in a way correlated with the true partition if $s^2 > p$ and impossible if $s^2 < p$. By comparison, the best-known rigorous result is that of Coja-Oghlan, who showed that clustering is possible if $s^2 > C p \ln p$ for some sufficiently large $C$. In a previous work, we proved that indeed it is information theoretically impossible to to cluster if $s^2 < p$ and furthermore it is information theoretically impossible to even estimate the model parameters from the graph when $s^2 < p$. Here we complete the proof of the conjecture by providing an efficient algorithm for clustering in a way that is correlated with the true partition when $s^2 > p$. A different independent proof of the same result was recently obtained by Laurent Massoulie.) <|cite_end|>.
In recent years, an ongoing line of work has aimed to extend these algorithmic techniques to more general semi-random models <|cite_start|> (Reference: Heuristics for Semirandom Graph Problems: We consider semirandom graph models for finding large independent sets, colorings, and bisections in graphs. These models generate problem instances by blending random and adversarial decisions. To generate semirandom independent set problems, an independent set S of ?n vertices is randomly chosen. Each edge connecting S with S is chosen with probability p, and an adversary is then allowed to add new edges arbitrarily, provided that S remains an independent set. The smaller p is, the greater the control the adversary has over the semirandom graph. We give a heuristic that with high probability recovers an independent set of size ?n whenever p> (1+?)lnn/?n, for any constant ?>0. We show that when p<(1??)lnn /?n, an independent set of size |S| cannot be recovered, unless NP?BPP. We use our result for maximum independent sets to obtain greatly improved heuristics for the model of k-colorable semirandom graphs introduced by Blum and Spencer. For constant k, our results are optimal up to constant factors in the edge probabilities. In the semirandom model for graph bisection, a random bisection (S, S) of the vertices is chosen. Each edge (u, v)?S×S is independently chosen with probability q and each edge (u, v)?S×S is independently chosen with probability pq. The adversary may then arbitrarily remove edges in S×S and add edges not in S×S. Extending the work of Boppana, we give a heuristic that recovers this bisection with high probability when p?q?cplogn/n, for c a sufficiently large constant.) <|cite_end|> <|cite_start|> (Reference: How Robust are Reconstruction Thresholds for Community Detection?: The stochastic block model is one of the oldest and most ubiquitous models for studying clustering and community detection. In an exciting sequence of developments, motivated by deep but non-rigorous ideas from statistical physics, Decelle et al. conjectured a sharp threshold for when community detection is possible in the sparse regime. Mossel, Neeman and Sly and Massoulie proved the conjecture and gave matching algorithms and lower bounds. Here we revisit the stochastic block model from the perspective of semirandom models where we allow an adversary to make `helpful' changes that strengthen ties within each community and break ties between them. We show a surprising result that these `helpful' changes can shift the information-theoretic threshold, making the community detection problem strictly harder. We complement this by showing that an algorithm based on semidefinite programming (which was known to get close to the threshold) continues to work in the semirandom model (even for partial recovery). This suggests that algorithms based on semidefinite programming are robust in ways that any algorithm meeting the information-theoretic threshold cannot be. These results point to an interesting new direction: Can we find robust, semirandom analogues to some of the classical, average-case thresholds in statistics? We also explore this question in the broadcast tree model, and we show that the viewpoint of semirandom models can help explain why some algorithms are preferred to others in practice, in spite of the gaps in their statistical performance on random models.) 
<|cite_end|> <|cite_start|> (Reference: Semidefinite Programs on Sparse Random Graphs and their Application to Community Detection: Denote by $A$ the adjacency matrix of an Erdos-Renyi graph with bounded average degree. We consider the problem of maximizing $\langle A-E\{A\},X\rangle$ over the set of positive semidefinite matrices $X$ with diagonal entries $X_{ii}=1$. We prove that for large (bounded) average degree $d$, the value of this semidefinite program (SDP) is --with high probability-- $2n\sqrt{d} + n\, o(\sqrt{d})+o(n)$. For a random regular graph of degree $d$, we prove that the SDP value is $2n\sqrt{d-1}+o(n)$, matching a spectral upper bound. Informally, Erdos-Renyi graphs appear to behave similarly to random regular graphs for semidefinite programming. We next consider the sparse, two-groups, symmetric community detection problem (also known as planted partition). We establish that SDP achieves the information-theoretically optimal detection threshold for large (bounded) degree. Namely, under this model, the vertex set is partitioned into subsets of size $n/2$, with edge probability $a/n$ (within group) and $b/n$ (across). We prove that SDP detects the partition with high probability provided $(a-b)^2/(4d)> 1+o_{d}(1)$, with $d= (a+b)/2$. By comparison, the information theoretic threshold for detecting the hidden partition is $(a-b)^2/(4d)> 1$: SDP is nearly optimal for large bounded average degree. Our proof is based on tools from different research areas: $(i)$ A new `higher-rank' Grothendieck inequality for symmetric matrices; $(ii)$ An interpolation method inspired from statistical physics; $(iii)$ An analysis of the eigenvectors of deformed Gaussian random matrices.) <|cite_end|> <|cite_start|> (Reference: Robust recovery for stochastic block models: We develop an efficient algorithm for weak recovery in a robust version of the stochastic block model. The algorithm matches the statistical guarantees of the best known algorithms for the vanilla version of the stochastic block model. In this sense, our results show that there is no price of robustness in the stochastic block model. Our work is heavily inspired by recent work of Banks, Mohanty, and Raghavendra (SODA 2021) that provided an efficient algorithm for the corresponding distinguishing problem. Our algorithm and its analysis significantly depart from previous ones for robust recovery. A key challenge is the peculiar optimization landscape underlying our algorithm: The planted partition may be far from optimal in the sense that completely unrelated solutions could achieve the same objective value. This phenomenon is related to the push-out effect at the BBP phase transition for PCA. To the best of our knowledge, our algorithm is the first to achieve robust recovery in the presence of such a push-out effect in a non-asymptotic setting. Our algorithm is an instantiation of a framework based on convex optimization (related to but distinct from sum-of-squares), which may be useful for other robust matrix estimation problems. A by-product of our analysis is a general technique that boosts the probability of success (over the randomness of the input) of an arbitrary robust weak-recovery algorithm from constant (or slowly vanishing) probability to exponentially high probability.) 
<|cite_end|> <|cite_start|> (Reference: Reaching Kesten-Stigum Threshold in the Stochastic Block Model under Node Corruptions: We study robust community detection in the context of node-corrupted stochastic block model, where an adversary can arbitrarily modify all the edges incident to a fraction of the $n$ vertices. We present the first polynomial-time algorithm that achieves weak recovery at the Kesten-Stigum threshold even in the presence of a small constant fraction of corrupted nodes. Prior to this work, even state-of-the-art robust algorithms were known to break under such node corruption adversaries, when close to the Kesten-Stigum threshold. We further extend our techniques to the $Z_2$ synchronization problem, where our algorithm reaches the optimal recovery threshold in the presence of similar strong adversarial perturbations. The key ingredient of our algorithm is a novel identifiability proof that leverages the push-out effect of the Grothendieck norm of principal submatrices.) <|cite_end|>, first by introducing monotone perturbations <|cite_start|> (Reference: Heuristics for Semirandom Graph Problems: We consider semirandom graph models for finding large independent sets, colorings, and bisections in graphs. These models generate problem instances by blending random and adversarial decisions. To generate semirandom independent set problems, an independent set S of ?n vertices is randomly chosen. Each edge connecting S with S is chosen with probability p, and an adversary is then allowed to add new edges arbitrarily, provided that S remains an independent set. The smaller p is, the greater the control the adversary has over the semirandom graph. We give a heuristic that with high probability recovers an independent set of size ?n whenever p> (1+?)lnn/?n, for any constant ?>0. We show that when p<(1??)lnn /?n, an independent set of size |S| cannot be recovered, unless NP?BPP. We use our result for maximum independent sets to obtain greatly improved heuristics for the model of k-colorable semirandom graphs introduced by Blum and Spencer. For constant k, our results are optimal up to constant factors in the edge probabilities. In the semirandom model for graph bisection, a random bisection (S, S) of the vertices is chosen. Each edge (u, v)?S×S is independently chosen with probability q and each edge (u, v)?S×S is independently chosen with probability pq. The adversary may then arbitrarily remove edges in S×S and add edges not in S×S. Extending the work of Boppana, we give a heuristic that recovers this bisection with high probability when p?q?cplogn/n, for c a sufficiently large constant.) <|cite_end|> <|cite_start|> (Reference: How Robust are Reconstruction Thresholds for Community Detection?: The stochastic block model is one of the oldest and most ubiquitous models for studying clustering and community detection. In an exciting sequence of developments, motivated by deep but non-rigorous ideas from statistical physics, Decelle et al. conjectured a sharp threshold for when community detection is possible in the sparse regime. Mossel, Neeman and Sly and Massoulie proved the conjecture and gave matching algorithms and lower bounds. Here we revisit the stochastic block model from the perspective of semirandom models where we allow an adversary to make `helpful' changes that strengthen ties within each community and break ties between them. 
We show a surprising result that these `helpful' changes can shift the information-theoretic threshold, making the community detection problem strictly harder. We complement this by showing that an algorithm based on semidefinite programming (which was known to get close to the threshold) continues to work in the semirandom model (even for partial recovery). This suggests that algorithms based on semidefinite programming are robust in ways that any algorithm meeting the information-theoretic threshold cannot be. These results point to an interesting new direction: Can we find robust, semirandom analogues to some of the classical, average-case thresholds in statistics? We also explore this question in the broadcast tree model, and we show that the viewpoint of semirandom models can help explain why some algorithms are preferred to others in practice, in spite of the gaps in their statistical performance on random models.) <|cite_end|> <|cite_start|> (Reference: Achieving the Bayes Error Rate in Stochastic Block Model by SDP, Robustly: We study the statistical performance of the semidefinite programming (SDP) relaxation approach for clustering under the binary symmetric Stochastic Block Model (SBM). We show that the SDP achieves an error rate of the form exp [ −(1− o(1)) 2 ] , where I is an appropriate information-theoretic measure of the signal-to-noise ratio. This bound matches the minimax lower bound on the optimal Bayes error rate for this problem, and improves upon existing results that are sub-optimal by a multiplicative constant in the exponent. As a corollary, our result implies that SDP achieves the optimal exact recovery threshold with the correct leading constant. We further show that this error rate of SDP is robust; that is, it remains unchanged under the so-called semirandom model where the graph is modified by a monotone adversary, as well as under the setting with heterogeneous edge probabilities. Our proof is based on a novel primal-dual analysis of the SDP.) <|cite_end|> <|cite_start|> (Reference: Minimax Rates for Robust Community Detection: In this work, we study the problem of community detection in the stochastic block model with adversarial node corruptions. Our main result is an efficient algorithm that can tolerate an $\epsilon$-fraction of corruptions and achieves error $O(\epsilon) + e^{-\frac{C}{2} (1 \pm o(1))}$ where $C = (\sqrt{a} - \sqrt{b})^2$ is the signal-to-noise ratio and $a/n$ and $b/n$ are the inter-community and intra-community connection probabilities respectively. These bounds essentially match the minimax rates for the SBM without corruptions. We also give robust algorithms for $\mathbb{Z}_2$-synchronization. At the heart of our algorithm is a new semidefinite program that uses global information to robustly boost the accuracy of a rough clustering. Moreover, we show that our algorithms are doubly-robust in the sense that they work in an even more challenging noise model that mixes adversarial corruptions with unbounded monotone changes, from the semi-random model.) <|cite_end|>(a perturbation is monotone with respect to the bipartition $(A, B)$ if it adds edges inside the comunities or remove edges accross communities) and then by allowing a small but constant fraction of adversarially chosen edge | [
"<|reference_start|> Asymptotic analysis of the stochastic block model for modular networks and its algorithmic applications: In this paper we extend our previous work on the stochastic block model, a commonly used generative model for social and biological networks, and the problem of inferring functional groups or communities from the topology of the network. We use the cavity method of statistical physics to obtain an asymptotically exact analysis of the phase diagram. We describe in detail properties of the detectability/undetectability phase transition and the easy/hard phase transition for the community detection problem. Our analysis translates naturally into a belief propagation algorithm for inferring the group memberships of the nodes in an optimal way, i.e., that maximizes the overlap with the underlying group memberships, and learning the underlying parameters of the block model. Finally, we apply the algorithm to two examples of real-world networks and discuss its performance. <|reference_end|>",
"<|reference_start|> Heuristics for Semirandom Graph Problems: We consider semirandom graph models for finding large independent sets, colorings, and bisections in graphs. These models generate problem instances by blending random and adversarial decisions. To generate semirandom independent set problems, an independent set S of ?n vertices is randomly chosen. Each edge connecting S with S is chosen with probability p, and an adversary is then allowed to add new edges arbitrarily, provided that S remains an independent set. The smaller p is, the greater the control the adversary has over the semirandom graph. We give a heuristic that with high probability recovers an independent set of size ?n whenever p> (1+?)lnn/?n, for any constant ?>0. We show that when p<(1??)lnn /?n, an independent set of size |S| cannot be recovered, unless NP?BPP. We use our result for maximum independent sets to obtain greatly improved heuristics for the model of k-colorable semirandom graphs introduced by Blum and Spencer. For constant k, our results are optimal up to constant factors in the edge probabilities. In the semirandom model for graph bisection, a random bisection (S, S) of the vertices is chosen. Each edge (u, v)?S×S is independently chosen with probability q and each edge (u, v)?S×S is independently chosen with probability pq. The adversary may then arbitrarily remove edges in S×S and add edges not in S×S. Extending the work of Boppana, we give a heuristic that recovers this bisection with high probability when p?q?cplogn/n, for c a sufficiently large constant. <|reference_end|>",
"<|reference_start|> Robust recovery for stochastic block models: We develop an efficient algorithm for weak recovery in a robust version of the stochastic block model. The algorithm matches the statistical guarantees of the best known algorithms for the vanilla version of the stochastic block model. In this sense, our results show that there is no price of robustness in the stochastic block model. Our work is heavily inspired by recent work of Banks, Mohanty, and Raghavendra (SODA 2021) that provided an efficient algorithm for the corresponding distinguishing problem. Our algorithm and its analysis significantly depart from previous ones for robust recovery. A key challenge is the peculiar optimization landscape underlying our algorithm: The planted partition may be far from optimal in the sense that completely unrelated solutions could achieve the same objective value. This phenomenon is related to the push-out effect at the BBP phase transition for PCA. To the best of our knowledge, our algorithm is the first to achieve robust recovery in the presence of such a push-out effect in a non-asymptotic setting. Our algorithm is an instantiation of a framework based on convex optimization (related to but distinct from sum-of-squares), which may be useful for other robust matrix estimation problems. A by-product of our analysis is a general technique that boosts the probability of success (over the randomness of the input) of an arbitrary robust weak-recovery algorithm from constant (or slowly vanishing) probability to exponentially high probability. <|reference_end|>",
"<|reference_start|> Reaching Kesten-Stigum Threshold in the Stochastic Block Model under Node Corruptions: We study robust community detection in the context of node-corrupted stochastic block model, where an adversary can arbitrarily modify all the edges incident to a fraction of the $n$ vertices. We present the first polynomial-time algorithm that achieves weak recovery at the Kesten-Stigum threshold even in the presence of a small constant fraction of corrupted nodes. Prior to this work, even state-of-the-art robust algorithms were known to break under such node corruption adversaries, when close to the Kesten-Stigum threshold. We further extend our techniques to the $Z_2$ synchronization problem, where our algorithm reaches the optimal recovery threshold in the presence of similar strong adversarial perturbations. The key ingredient of our algorithm is a novel identifiability proof that leverages the push-out effect of the Grothendieck norm of principal submatrices. <|reference_end|>"
] | [
36,
41,
44,
45
] | {"<|cite_1|>": "ss-1200698", "<|cite_2|>": "ss-1025670", "<|cite_3|>": "ss-1851008", "<|cite_4|>": "ss-1365153", "<|cite_5|>": "ss-1242827", "<|multi_cite_6_1|>": "ss-760816", "<|multi_cite_6_2|>": "ss-1359765", "<|multi_cite_7_1|>": "ss-1003654", "<|multi_cite_7_2|>": "ss-1031800", "<|multi_cite_7_3|>": "ss-821539", "<|multi_cite_7_4|>": "arxiv-17267", "<|multi_cite_8_1|>": "ss-1982078", "<|multi_cite_8_2|>": "ss-902614", "<|multi_cite_8_3|>": "ss-1851009", "<|multi_cite_8_4|>": "ss-827313", "<|multi_cite_8_5|>": "ss-1530422", "<|multi_cite_9_1|>": "ss-1718937", "<|multi_cite_9_2|>": "ss-1851010", "<|multi_cite_9_3|>": "arxiv-7873", "<|cite_10|>": "ss-1316980", "<|multi_cite_11_1|>": "arxiv-86574", "<|multi_cite_11_2|>": "arxiv-436073", "<|cite_12|>": "ss-1316980", "<|cite_13|>": "ss-1316980", "<|cite_14|>": "ss-1316980", "<|cite_15|>": "ss-1316980", "<|cite_16|>": "ss-1316980", "<|cite_17|>": "ss-1316980", "<|multi_cite_18_1|>": "arxiv-7873", "<|multi_cite_18_2|>": "ss-1372435", "<|cite_19|>": "arxiv-121092", "<|cite_20|>": "arxiv-121092", "<|cite_21|>": "arxiv-85700", "<|cite_22|>": "ss-1103643", "<|cite_23|>": "arxiv-121092", "<|cite_24|>": "ss-955333", "<|multi_cite_25_1|>": "arxiv-24612", "<|multi_cite_25_2|>": "arxiv-52653", "<|multi_cite_25_3|>": "ss-1851011", "<|multi_cite_25_4|>": "ss-835171", "<|multi_cite_25_5|>": "arxiv-52827", "<|multi_cite_26_1|>": "ss-827313", "<|multi_cite_26_2|>": "arxiv-86574", "<|multi_cite_26_3|>": "arxiv-76516", "<|multi_cite_26_4|>": "arxiv-381425", "<|multi_cite_26_5|>": "arxiv-505695", "<|multi_cite_27_1|>": "ss-827313", "<|multi_cite_27_2|>": "arxiv-86574", "<|multi_cite_27_3|>": "ss-1851012", "<|multi_cite_27_4|>": "arxiv-436073", "<|cite_28|>": "arxiv-381425", "<|multi_cite_29_1|>": "arxiv-436073", "<|multi_cite_29_2|>": "arxiv-505695", "<|cite_30|>": "ss-1851013", "<|cite_31|>": "ss-1316980", "<|cite_32|>": "arxiv-360224", "<|cite_33|>": "arxiv-200953", "<|cite_34|>": "arxiv-402585", "<|cite_35|>": "arxiv-402585", "<|multi_cite_36_1|>": "ss-685209", "<|multi_cite_36_2|>": "ss-1851014", "<|multi_cite_37_1|>": "ss-685209", "<|multi_cite_37_2|>": "ss-1851014", "<|cite_38|>": "ss-685209", "<|cite_39|>": "ss-1107572", "<|cite_40|>": "ss-1851014", "<|cite_41|>": "ss-1851014", "<|cite_42|>": "ss-1851014"} |
1711.10589 | <|paper_start|> Title: Contextual Outlier Interpretation
Abstract: Contextual Outlier Interpretation: Outlier detection plays an essential role in many data-driven applications to identify isolated instances that are different from the majority. While many statistical learning and data mining techniques have been used for developing more effective outlier detection algorithms, the interpretation of detected outliers has not received much attention. Interpretation is becoming increasingly important to help people trust and evaluate the developed models by providing intrinsic reasons why certain outliers are chosen. It is difficult, if not impossible, to simply apply feature selection for explaining outliers due to the distinct characteristics of various detection models, the complicated structures of data in certain applications, and the imbalanced distribution of outliers and normal instances. In addition, the role of the contrastive contexts in which outliers are located, as well as the relation between outliers and their contexts, is usually overlooked in interpretation. To tackle the issues above, in this paper, we propose a novel Contextual Outlier INterpretation (COIN) method to explain the abnormality of existing outliers spotted by detectors. The interpretability for an outlier is achieved from three aspects: an outlierness score, the attributes that contribute to the abnormality, and a contextual description of its neighborhoods. Experimental results on various types of datasets demonstrate the flexibility and effectiveness of the proposed framework compared with existing interpretation approaches.
Introduction
Outlier detection has become a fundamental task in many data-driven applications. Outliers refer to isolated instances that do not conform to expected normal patterns in a dataset <|cite_start|> (Reference: Anomaly detection: {{A Survey}}: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.) <|cite_end|> <|cite_start|> (Reference: On the evaluation of unsupervised outlier detection: measures, datasets, and an empirical study: ) <|cite_end|>. Typical examples include notable human behaviors in static environment, online spam detection <|cite_start|> (Reference: Detecting collusive spamming activities in community question answering: Community Question Answering (CQA) portals provide rich sources of information on a variety of topics. However, the authenticity and quality of questions and answers (Q&As) has proven hard to control. In a troubling direction, the widespread growth of crowdsourcing websites has created a large-scale, potentially difficult-to-detect workforce to manipulate malicious contents in CQA. The crowd workers who join the same crowdsourcing task about promotion campaigns in CQA collusively manipulate deceptive Q&As for promoting a target (product or service). The collusive spamming group can fully control the sentiment of the target. How to utilize the structure and the attributes for detecting manipulated Q&As? How to detect the collusive group and leverage the group information for the detection task? To shed light on these research questions, we propose a unified framework to tackle the challenge of detecting collusive spamming activities of CQA. First, we interpret the questions and answers in CQA as two independent networks. Second, we detect collusive question groups and answer groups from these two networks respectively by measuring the similarity of the contents posted within a short duration. 
Third, using attributes (individual-level and group-level) and correlations (user-based and content-based), we proposed a combined factor graph model to detect deceptive Q&As simultaneously by combining two independent factor graphs. With a large-scale practical data set, we find that the proposed framework can detect deceptive contents at early stage, and outperforms a number of competitive baselines.) <|cite_end|> <|cite_start|> (Reference: FLOCK: Combating Astroturfing on Livestreaming Platforms: Livestreaming platforms have become increasingly popular in recent years as a means of sharing and advertising creative content. Popular content streamers who attract large viewership to their live broadcasts can earn a living by means of ad revenue, donations and channel subscriptions. Unfortunately, this incentivized popularity has simultaneously resulted in incentive for fraudsters to provide services to astroturf, or artificially inflate viewership metrics by providing fake "live" views to customers. Our work provides a number of major contributions: (a) formulation: we are the first to introduce and characterize the viewbot fraud problem in livestreaming platforms, (b) methodology: we propose FLOCK, a principled and unsupervised method which efficiently and effectively identifies botted broadcasts and their constituent botted views, and (c) practicality: our approach achieves over 98% precision in identifying botted broadcasts and over 90% precision/recall against sizable synthetically generated viewbot attacks on a real-world livestreaming workload of over 16 million views and 92 thousand broadcasts. FLOCK successfully operates on larger datasets in practice and is regularly used at a large, undisclosed livestreaming corporation.) <|cite_end|> <|cite_start|> (Reference: Enquiring Minds: Early Detection of Rumors in Social Media from Enquiry Posts: Many previous techniques identify trending topics in social media, even topics that are not pre-defined. We present a technique to identify trending rumors, which we define as topics that include disputed factual claims. Putting aside any attempt to assess whether the rumors are true or false, it is valuable to identify trending rumors as early as possible. It is extremely difficult to accurately classify whether every individual post is or is not making a disputed factual claim. We are able to identify trending rumors by recasting the problem as finding entire clusters of posts whose topic is a disputed factual claim. The key insight is that when there is a rumor, even though most posts do not raise questions about it, there may be a few that do. If we can find signature text phrases that are used by a few people to express skepticism about factual claims and are rarely used to express anything else, we can use those as detectors for rumor clusters. Indeed, we have found a few phrases that seem to be used exactly that way, including: "Is this true?", "Really?", and "What?". Relatively few posts related to any particular rumor use any of these enquiry phrases, but lots of rumor diffusion processes have some posts that do and have them quite early in the diffusion. We have developed a technique based on searching for the enquiry phrases, clustering similar posts together, and then collecting related posts that do not contain these simple phrases. We then rank the clusters by their likelihood of really containing a disputed factual claim. 
The detector, which searches for the very rare but very informative phrases, combined with clustering and a classifier on the clusters, yields surprisingly good performance. On a typical day of Twitter, about a third of the top 50 clusters were judged to be rumors, a high enough precision that human analysts might be willing to sift through them.) <|cite_end|>, public disease outbreaks <|cite_start|> (Reference: Rule-based anomaly pattern detection for detecting disease outbreaks: This paper presents an algorithm for performing early detection of disease outbreaks by searching a database of emergency department cases for anomalous patterns. Traditional techniques for anomaly detection are unsatisfactory for this problem because they identify individual data points that are rare due to particular combinations of features. When applied to our scenario, these traditional algorithms discover isolated outliers of particularly strange events, such as someone accidentally shooting their ear, that are not indicative of a new outbreak. Instead, we would like to detect anomalous patterns. These patterns are groups with specific characteristics whose recent pattern of illness is anomalous relative to historical patterns. We propose using a rule-based anomaly detection algorithm that characterizes each anomalous pattern with a rule. The significance of each rule is carefully evaluated using Fisher's Exact Test and a randomization test. Our algorithm is compared against a standard detection algorithm by measuring the number of false positives and the timeliness of detection. Simulated data, produced by a simulator that creates the effects of an epidemic on a city, is used for evaluation. The results indicate that our algorithm has significantly better detection times for common significance thresholds while having a slightly higher false positive rate.) <|cite_end|>, and dramatic changes in temporal signals <|cite_start|> (Reference: Topic-conditioned novelty detection: Automated detection of the first document reporting each new event in temporally-sequenced streams of documents is an open challenge. In this paper we propose a new approach which addresses this problem in two stages: 1) using a supervised learning algorithm to classify the on-line document stream into pre-defined broad topic categories, and 2) performing topic-conditioned novelty detection for documents in each topic. We also focus on exploiting named-entities for event-level novelty detection and using feature-based heuristics derived from the topic histories. Evaluating these methods using a set of broadcast news stories, our results show substantial performance gains over the traditional one-level approach to the novelty detection problem.) <|cite_end|> <|cite_start|> (Reference: Online Novelty Detection on Temporal Sequences: In this paper, we present a new framework for online novelty detection on temporal sequences. This framework include a mechanism for associating each detection result with a confidence value. Based on this framework, we develop a concrete online detection algorithm, by modeling the temporal sequence using an online support vector regression algorithm. Experiments on both synthetic and real world data are performed to demonstrate the promising performance of our proposed detection algorithm.) <|cite_end|>. 
In addition, outlier detection plays an essential role in securing a trustworthy cyberspace by detecting malicious actors and contaminated content, such as spammers in social media <|cite_start|> (Reference: oddball: Spotting Anomalies in Weighted Graphs: ) <|cite_end|> and fraudsters in financial systems <|cite_start|> (Reference: A Comprehensive Survey of Data Mining-based Fraud Detection Research: This survey paper categorises, compares, and summarises from almost all published technical and review articles in automated fraud detection within the last 10 years. It defines the professional fraudster, formalises the main types and subtypes of known fraud, and presents the nature of data evidence collected within affected industries. Within the business context of mining the data to achieve higher cost savings, this research presents methods and techniques together with their problems. Compared to all related reviews on fraud detection, this survey covers much more technical articles and is the only one, to the best of our knowledge, which proposes alternative data and solutions from related domains.) <|cite_end|>.
Complementing existing work, enabling interpretability could benefit outlier detection and analysis in several respects. First, interpretation helps bridge the gap between detecting outliers and identifying domain-specific anomalies. Outlier detection can output data instances with rare and noteworthy patterns, but in many applications domain experts must still manually select, from the detected outliers, the domain-specific anomalies they actually care about. For example, in e-commerce website monitoring, outlier detection can be applied to discover users or merchants with rare behaviors, but administrators need to inspect the results to select those involved in malicious activities such as fraud. Interpretation of the detected outliers, which provides reasons for their outlierness, can significantly reduce the effort of such manual inspection. Second, interpretation can be used in the evaluation process to complement current metrics such as the area under the ROC curve (AUC) and nDCG <|cite_start|> (Reference: The relationship between precision-recall and ROC curves: Receiver Operator Characteristic (ROC) curves are commonly used to present results for binary decision problems in machine learning. However, when dealing with highly skewed datasets, Precision-Recall (PR) curves give a more informative picture of an algorithm's performance. We show that a deep connection exists between ROC space and PR space, such that a curve dominates in ROC space if and only if it dominates in PR space. A corollary is the notion of an achievable PR curve, which has properties much like the convex hull in ROC space; we show an efficient algorithm for computing this curve. Finally, we also note differences in the two types of curves are significant for algorithm design. For example, in PR space it is incorrect to linearly interpolate between points. Furthermore, algorithms that optimize the area under the ROC curve are not guaranteed to optimize the area under the PR curve.) <|cite_end|>, which provide limited information about the characteristics of the detected outliers. Third, a detection method that works well on one dataset or application is not guaranteed to perform well on others. Unlike supervised learning, outlier detection is usually performed with unsupervised methods and cannot be evaluated in the same way. Thus, outlier interpretation would facilitate the usability of outlier detection techniques in real-world applications.
To this end, one straightforward way to interpret outliers is to apply feature selection to identify a subset of features that distinguishes outliers from normal instances <|cite_start|> (Reference: Finding Intensional Knowledge of Distance-Based Outliers: ) <|cite_end|> <|cite_start|> (Reference: Explaining outliers by subspace separability: Outliers are extraordinary objects in a data collection. Depending on the domain, they may represent errors, fraudulent activities or rare events that are subject of our interest. Existing approaches focus on detection of outliers or degrees of outlierness (ranking), but do not provide a possible explanation of how these objects deviate from the rest of the data. Such explanations would help user to interpret or validate the detected outliers. The problem addressed in this paper is as follows: given an outlier detected by an existing algorithm, we propose a method that determines possible explanations for the outlier. These explanations are expressed in the form of subspaces in which the given outlier shows separability from the inliers. In this manner, our proposed method complements existing outlier detection algorithms by providing additional information about the outliers. Our method is designed to work with any existing outlier detection algorithm and it also includes a heuristic that gives a substantial speedup over the baseline strategy.) <|cite_end|> <|cite_start|> (Reference: Mining Contrast Subspaces: ) <|cite_end|> <|cite_start|> (Reference: Discovering outlying aspects in large datasets: ) <|cite_end|>. However, first, it is difficult for some existing methods to efficiently handle datasets of large size or high dimensionality <|cite_start|> (Reference: Discovering outlying aspects in large datasets: ) <|cite_end|>, or to effectively obtain interpretations from complex data types and distributions <|cite_start|> (Reference: Finding Intensional Knowledge of Distance-Based Outliers: ) <|cite_end|>. Second, we measure the abnormality degree of outliers through interpretation, which is important in many applications where actions may be taken on outliers with higher priority. Some detectors only output binary labels indicating whether each data instance is an outlier; even when continuous outlier scores are provided, they are usually on different scales for different detection methods. A unified scoring mechanism based on interpretation facilitates comparisons among various detectors. Third, besides identifying the notable attributes of outliers, we also analyze the context (e.g., the contrastive neighborhood) in which outliers are detected. ``It takes two to tango.'' Discovering the relations between an outlier and its context provides richer information before taking actions on the detected outliers in real applications.
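To make the above discussion concrete, the following sketch shows one naive instantiation of this idea under simple assumptions: given a query outlier, each attribute is scored by how strongly the outlier deviates from its $k$ nearest neighbors along that attribute. The function name and the per-attribute $z$-score are illustrative choices only and do not correspond to any particular method cited above.
\begin{verbatim}
import numpy as np

def naive_attribute_scores(x_o, X_normal, k=20, eps=1e-12):
    """Score each attribute of outlier x_o by its deviation from the
    k nearest normal instances (a simple per-attribute z-score)."""
    # Context: k nearest normal neighbors in the full attribute space.
    dists = np.linalg.norm(X_normal - x_o, axis=1)
    ctx = X_normal[np.argsort(dists)[:k]]
    # Per-attribute deviation of the outlier from its local context.
    mu, sigma = ctx.mean(axis=0), ctx.std(axis=0) + eps
    return np.abs(x_o - mu) / sigma

# Toy usage: attribute 2 is injected to be abnormal and should score highest.
rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 5))
x_o = rng.normal(size=5)
x_o[2] += 8.0
print(naive_attribute_scores(x_o, X_normal).round(2))
\end{verbatim}
Such a per-attribute ranking already gives a rough explanation, but it ignores attribute interactions and provides neither a calibrated outlierness score nor an explicit characterization of the context, which motivates the approach proposed next.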
To tackle the aforementioned challenges, in this paper we propose a novel Contextual Outlier INterpretation (\coin) approach that provides explanations for outliers identified by detectors. We define the interpretation of an outlier as a triple: its noteworthy features, its degree of outlierness, and the contrastive context with respect to the outlier query. The first two components are extracted from the relations between the outlier and its context. Moreover, the interpretations of all outliers can be aggregated to evaluate the given outlier detection model, and the performance of different detectors can be compared through interpretations since COIN provides a unified evaluation basis. COIN can also be applied to existing outlier/anomaly detection methods that already provide explanations for their results. In addition, prior knowledge of attribute characteristics in certain application scenarios can be easily incorporated into the interpretation process, which enables end users to perform model selection according to specific demands.
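As a preview of how such a contextual interpretation could be instantiated, the sketch below derives the three components from a local classification task: the context is taken as the outlier's $k$ nearest neighbors, a linear classifier separates a slightly perturbed copy of the outlier from that context, the normalized magnitudes of the classifier weights serve as attribute suspicious scores, and the distance to the decision boundary serves as an outlierness score. The perturbation scheme, the choice of a linear SVM, and all function names are illustrative assumptions rather than the exact model developed later in this paper.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.svm import LinearSVC

def interpret_outlier(x_o, X, k=30, n_synthetic=30, seed=0):
    """Return (suspicious_scores, outlierness, context_idx) for outlier x_o."""
    rng = np.random.default_rng(seed)

    # 1. Contrastive context: the k nearest neighbors of the outlier in X
    #    (for simplicity, x_o itself is not excluded if it belongs to X).
    nn = NearestNeighbors(n_neighbors=k).fit(X)
    context_idx = nn.kneighbors(x_o.reshape(1, -1), return_distance=False)[0]
    ctx = X[context_idx]

    # 2. Build a small "outlier class" by perturbing x_o, so that a
    #    classifier can be trained against the context class.
    scale = 0.1 * ctx.std(axis=0) + 1e-12
    syn = x_o + rng.normal(scale=scale, size=(n_synthetic, x_o.shape[0]))

    # 3. Linear classifier separating the outlier class (1) from context (0).
    Z = np.vstack([ctx, syn])
    y = np.r_[np.zeros(len(ctx)), np.ones(len(syn))]
    clf = LinearSVC(C=1.0, max_iter=10000).fit(Z, y)

    # 4. Attribute suspicious scores from normalized |weights|; outlierness
    #    from the signed distance of x_o to the decision boundary.
    w = np.abs(clf.coef_[0])
    suspicious = w / (w.sum() + 1e-12)
    outlierness = float(clf.decision_function(x_o.reshape(1, -1))[0])
    return suspicious, outlierness, context_idx
\end{verbatim}
The linear classifier here is only a stand-in; any interpretable local model whose parameters can be mapped back to the attributes would fit the same template.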
The contributions of this work are summarized as follows:
\begin{itemize}[leftmargin=*]
\item We define the interpretation of an outlier as three aspects: abnormal attributes, outlierness score, and the identification of the outlier's local context.
\item We propose a novel model-agnostic framework to interpret outliers, and design a concrete model within the framework to extract the interpretation information.
\item Comprehensive evaluations on interpretation quality, as well as case studies, are conducted through experiments on both real-world and synthetic datasets.
\end{itemize}
\begin{table}[t]
\begin{center}
\begin{tabular}{ c|l }
\hline \hline
\multicolumn{1}{c}{\bfseries Notation} & \multicolumn{1}{|c}{\bfseries Definition} \\
\hline
$N$ & the number of data points in the dataset \\
\hline
$M$ & the number of attributes \\
\hline
$\textbf{x}$ & a data instance, $\textbf{x}\in \mathbb{R}^M$ \\
\hline
$a_m$ & the $m$-th attribute \\
\hline
$\mathscr{X}$ & all data instances, $\mathscr{X} = \{\textbf{x}_1, \textbf{x}_2, ..., \textbf{x}_N\}$ \\
\hline
$h$ & an outlier detection method \\
\hline
$\mathscr{O}$ & the collection of detected outliers \\
\hline
$\textbf{o}_i$ & outlier $i$ identified by the detector \\
\hline
$\mathscr{O}_i$ & the outlier class corresponding to $\textbf{o}_i$ \\
\hline
$\mathscr{C}_{i}$ & the context of outlier $\textbf{o}_i$ \\
\hline
$k$ & the number of instances included in $\mathscr{C}_{i}$ \\
\hline
$s(a_m)$ & suspicious score of attribute $a_m$ \\
\hline
$d(\textbf{o})$ & outlierness score of $\textbf{o}$ \\
\hline \hline
\end{tabular}
\end{center}
\caption{Symbols and Notations}
\vspace{-20pt}
\label{tb:notation}
\end{table}
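For readability, the notation in Table~\ref{tb:notation} can be mirrored by a small container holding a single interpretation result. The class below is only an illustrative mapping of the symbols to code, not an interface mandated by the framework.
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class OutlierInterpretation:
    """Interpretation triple for one detected outlier o_i."""
    outlier: np.ndarray            # o_i, a point in R^M
    context_idx: np.ndarray        # indices of the k instances forming C_i
    suspicious_scores: np.ndarray  # s(a_m) for m = 1, ..., M
    outlierness: float             # d(o_i)

    def top_attributes(self, t=3):
        """Indices of the t most suspicious attributes, highest first."""
        return np.argsort(self.suspicious_scores)[::-1][:t]
\end{verbatim}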
Related Work
Many outlier detection approaches have been developed over the past decades. These approaches can be divided into three categories: density-based, distance-based and model-based approaches. Some notable density-based detection methods include <|cite_start|> (Reference: {LOF: Identifying Density-Based Local Outliers: For many KDD applications, such as detecting criminal activities in E-commerce, finding the rare instances or the outliers, can be more interesting than finding the common patterns. Existing work in outlier detection regards being an outlier as a binary property. In this paper, we contend that for many scenarios, it is more meaningful to assign to each object a degree of being an outlier. This degree is called the local outlier factor (LOF) of an object. It is local in that the degree depends on how isolated the object is with respect to the surrounding neighborhood. We give a detailed formal analysis showing that LOF enjoys many desirable properties. Using real-world datasets, we demonstrate that LOF can be used to find outliers which appear to be meaningful, but can otherwise not be identified with existing approaches. Finally, a careful performance evaluation of our algorithm confirms we show that our approach of finding local outliers can be practical.) <|cite_end|> <|cite_start|> (Reference: {Outlier Detection for High Dimensional Data: The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are high dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. However, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness. In fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. Consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. In this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set.) <|cite_end|> <|cite_start|> (Reference: Enhancing Effectiveness of Outlier Detections for Low Density Patterns: ) <|cite_end|> <|cite_start|> (Reference: Conditional anomaly detection: When anomaly detection software is used as a data analysis tool, finding the hardest-to-detect anomalies is not the most critical task. Rather, it is often more important to make sure that those anomalies that are reported to the user are in fact interesting. If too many unremarkable data points are returned to the user labeled as candidate anomalies, the software can soon fall into disuse. One way to ensure that returned anomalies are useful is to make use of domain knowledge provided by the user. Often, the data in question includes a set of environmental attributes whose values a user would never consider to be directly indicative of an anomaly. However, such attributes cannot be ignored because they have a direct effect on the expected distribution of the result attributes whose values can indicate an anomalous observation. 
This paper describes a general purpose method called conditional anomaly detection for taking such differences among attributes into account, and proposes three different expectation-maximization algorithms for learning the model that is used in conditional anomaly detection. Experiments with more than 13 different data sets compare our algorithms with several other more standard methods for outlier or anomaly detection) <|cite_end|> <|cite_start|> (Reference: {On Community Outliers and Their Efficient Detection in Information Networks: Linked or networked data are ubiquitous in many applications. Examples include web data or hypertext documents connected via hyperlinks, social networks or user profiles connected via friend links, co-authorship and citation information, blog data, movie reviews and so on. In these datasets (called "information networks"), closely related objects that share the same properties or interests form a community. For example, a community in blogsphere could be users mostly interested in cell phone reviews and news. Outlier detection in information networks can reveal important anomalous and interesting behaviors that are not obvious if community information is ignored. An example could be a low-income person being friends with many rich people even though his income is not anomalously low when considered over the entire population. This paper first introduces the concept of community outliers (interesting points or rising stars for a more positive sense), and then shows that well-known baseline approaches without considering links or community information cannot find these community outliers. We propose an efficient solution by modeling networked data as a mixture model composed of multiple normal communities and a set of randomly generated outliers. The probabilistic model characterizes both data and links simultaneously by defining their joint distribution based on hidden Markov random fields (HMRF). Maximizing the data likelihood and the posterior of the model gives the solution to the outlier inference problem. We apply the model on both synthetic data and DBLP data sets, and the results demonstrate importance of this concept, as well as the effectiveness and efficiency of the proposed approach.) <|cite_end|>. Representative distance-based approaches include <|cite_start|> (Reference: Distance-based outliers: algorithms and applications: ) <|cite_end|> <|cite_start|> (Reference: {Efficient algorithms for mining outliers from large data sets: In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers. In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. This results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight and reveal several expected and unexpected aspects of the database. 
The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality.) <|cite_end|> <|cite_start|> (Reference: Mining Distance-Based Outliers in near Linear Time with Randomization and a Simple Pruning Rule: Defining outliers by their distance to neighboring examples is a popular approach to finding unusual examples in a data set. Recently, much work has been conducted with the goal of finding fast algorithms for this task. We show that a simple nested loop algorithm that in the worst case is quadratic can give near linear time performance when the data is in random order and a simple pruning rule is used. We test our algorithm on real high-dimensional data sets with millions of examples and show that the near linear scaling holds over several orders of magnitude. Our average case analysis suggests that much of the efficiency is because the time to process non-outliers, which are the majority of examples, does not depend on the size of the data set.) <|cite_end|> <|cite_start|> (Reference: Dolphin: an efficient algorithm for mining distance-based outliers in very large datasets: In this work a novel distance-based outlier detection algorithm, named DOLPHIN, working on disk-resident datasets and whose I/O cost corresponds to the cost of sequentially reading the input dataset file twice, is presented.
It is both theoretically and empirically shown that the main memory usage of DOLPHIN amounts to a small fraction of the dataset and that DOLPHIN has linear time performance with respect to the dataset size. DOLPHIN gains efficiency by naturally merging together in a unified schema three strategies, namely the selection policy of objects to be maintained in main memory, usage of pruning rules, and similarity search techniques. Importantly, similarity search is accomplished by the algorithm without the need of preliminarily indexing the whole dataset, as other methods do.
The algorithm is simple to implement and it can be used with any type of data, belonging to either metric or nonmetric spaces. Moreover, a modification to the basic method allows DOLPHIN to deal with the scenario in which the available buffer of main memory is smaller than its standard requirements. DOLPHIN has been compared with state-of-the-art distance-based outlier detection algorithms, showing that it is much more efficient.) <|cite_end|> <|cite_start|> (Reference: Isolation-{{Based Anomaly Detection}}: Anomalies are data points that are few and different. As a result of these properties, we show that, anomalies are susceptible to a mechanism called isolation. This article proposes a method called Isolation Forest (iForest), which detects anomalies purely based on the concept of isolation without employing any distance or density measure---fundamentally different from all existing methods.
As a result, iForest is able to exploit subsampling (i) to achieve a low linear time-complexity and a small memory-requirement and (ii) to deal with the effects of swamping and masking effectively. Our empirical evaluation shows that iForest outperforms ORCA, one-class SVM, LOF and Random Forests in terms of AUC, processing time, and it is robust against masking and swamping effects. iForest also works well in high dimensional problems containing a large number of irrelevant attributes, and when anomalies are not available in training sample.) <|cite_end|>. For model-based approaches, some well-known examples are <|cite_start|> (Reference: Estimating the Support of a High-dimensional Distribution: Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a simple subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data.) <|cite_end|> <|cite_start|> (Reference: Discovering cluster-based local outliers: ) <|cite_end|> <|cite_start|> (Reference: Non-Negative Residual Matrix Factorization with Application to Graph
Anomaly Detection: Given an IP source-destination traffic network, how do we spot mis-behavioral IP sources (e.g., port-scanner)? How do we find strange users in a user-movie rating graph? Moreover, how can we present the results intuitively so that it is relatively easier for data analysts to interpret? We propose NrMF, a non-negative residual matrix factorization framework, to address such challenges. We present an optimization formulation as well as an effective algorithm to solve it. Our method can naturally capture abnormal behaviors on graphs. In addition, the proposed algorithm is linear wrt the size of the graph therefore it is suitable for large graphs. The experimental results on several data sets validate its effectiveness as well as efficiency.) <|cite_end|>. Various approaches have been proposed to tackle challenges including the curse of dimensionality <|cite_start|> (Reference: {Outlier Detection for High Dimensional Data: The outlier detection problem has important applications in the field of fraud detection, network robustness analysis, and intrusion detection. Most such applications are high dimensional domains in which the data can contain hundreds of dimensions. Many recent algorithms use concepts of proximity in order to find outliers based on their relationship to the rest of the data. However, in high dimensional space, the data is sparse and the notion of proximity fails to retain its meaningfulness. In fact, the sparsity of high dimensional data implies that every point is an almost equally good outlier from the perspective of proximity-based definitions. Consequently, for high dimensional data, the notion of finding meaningful outliers becomes substantially more complex and non-obvious. In this paper, we discuss new techniques for outlier detection which find the outliers by studying the behavior of projections from the data set.) <|cite_end|> <|cite_start|> (Reference: Outlier identification in high dimensions: ) <|cite_end|> <|cite_start|> (Reference: Clustering high-dimensional data: A survey on subspace clustering, pattern-based clustering, and correlation clustering: As a prolific research area in data mining, subspace clustering and related problems induced a vast quantity of proposed solutions. However, many publications compare a new proposition—if at all—with one or two competitors, or even with a so-called “naïve” ad hoc solution, but fail to clarify the exact problem definition. As a consequence, even if two solutions are thoroughly compared experimentally, it will often remain unclear whether both solutions tackle the same problem or, if they do, whether they agree in certain tacit assumptions and how such assumptions may influence the outcome of an algorithm. In this survey, we try to clarify: (i) the different problem definitions related to subspace clustering in general; (ii) the specific difficulties encountered in this field of research; (iii) the varying assumptions, heuristics, and intuitions forming the basis of different approaches; and (iv) how several prominent solutions tackle different problems.) <|cite_end|>, the massive data volume <|cite_start|> (Reference: {Efficient algorithms for mining outliers from large data sets: In this paper, we propose a novel formulation for distance-based outliers that is based on the distance of a point from its kth nearest neighbor. We rank each point on the basis of its distance to its kth nearest neighbor and declare the top n points in this ranking to be outliers.
In addition to developing relatively straightforward solutions to finding such outliers based on the classical nested-loop join and index join algorithms, we develop a highly efficient partition-based algorithm for mining outliers. This algorithm first partitions the input data set into disjoint subsets, and then prunes entire partitions as soon as it is determined that they cannot contain outliers. This results in substantial savings in computation. We present the results of an extensive experimental study on real-life and synthetic data sets. The results from a real-life NBA database highlight and reveal several expected and unexpected aspects of the database. The results from a study on synthetic data sets demonstrate that the partition-based algorithm scales well with respect to both data set size and data set dimensionality.) <|cite_end|> <|cite_start|> (Reference: Dolphin: an efficient algorithm for mining distance-based outliers in very large datasets: In this work a novel distance-based outlier detection algorithm, named DOLPHIN, working on disk-resident datasets and whose I/O cost corresponds to the cost of sequentially reading the input dataset file twice, is presented.
It is both theoretically and empirically shown that the main memory usage of DOLPHIN amounts to a small fraction of the dataset and that DOLPHIN has linear time performance with respect to the dataset size. DOLPHIN gains efficiency by naturally merging together in a unified schema three strategies, namely the selection policy of objects to be maintained in main memory, usage of pruning rules, and similarity search techniques. Importantly, similarity search is accomplished by the algorithm without the need of preliminarily indexing the whole dataset, as other methods do.
The algorithm is simple to implement and it can be used with any type of data, belonging to either metric or nonmetric spaces. Moreover, a modification to the basic method allows DOLPHIN to deal with the scenario in which the available buffer of main memory is smaller than its standard requirements. DOLPHIN has been compared with state-of-the-art distance-based outlier detection algorithms, showing that it is much more efficient.) <|cite_end|>, and heterogenous information sources <|cite_start|> (Reference: {On Community Outliers and Their Efficient Detection in Information Networks: Linked or networked data are ubiquitous in many applications. Examples include web data or hypertext documents connected via hyperlinks, social networks or user profiles connected via friend links, co-authorship and citation information, blog data, movie reviews and so on. In these datasets (called "information networks"), closely related objects that share the same properties or interests form a community. For example, a community in blogsphere could be users mostly interested in cell phone reviews and news. Outlier detection in information networks can reveal important anomalous and interesting behaviors that are not obvious if community information is ignored. An example could be a low-income person being friends with many rich people even though his income is not anomalously low when considered over the entire population. This paper first introduces the concept of community outliers (interesting points or rising stars for a more positive sense), and then shows that well-known baseline approaches without considering links or community information cannot find these community outliers. We propose an efficient solution by modeling networked data as a mixture model composed of multiple normal communities and a set of randomly generated outliers. The probabilistic model characterizes both data and links simultaneously by defining their joint distribution based on hidden Markov random fields (HMRF). Maximizing the data likelihood and the posterior of the model gives the solution to the outlier inference problem. We apply the model on both synthetic data and DBLP data sets, and the results demonstrate importance of this concept, as well as the effectiveness and efficiency of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: Scalable Anomaly Ranking of Attributed Neighborhoods: Given a graph with node attributes, what neighborhoods are anomalous? To answer this question, one needs a quality score that utilizes both structure and attributes. Popular existing measures either quantify the structure only and ignore the attributes (e.g., conductance), or only consider the connectedness of the nodes inside the neighborhood and ignore the cross-edges at the boundary (e.g., density). In this work we propose normality, a new quality measure for attributed neighborhoods. Normality utilizes structure and attributes together to quantify both internal consistency and external separability. It exhibits two key advantages over other measures: (1) It allows many boundary-edges as long as they can be "exonerated"; i.e., either (i) are expected under a null model, and/or (ii) the boundary nodes do not exhibit the subset of attributes shared by the neighborhood members. Existing measures, in contrast, penalize boundary edges irrespectively. (2) Normality can be efficiently maximized to automatically infer the shared attribute subspace (and respective weights) that characterize a neighborhood. 
This efficient optimization allows us to process graphs with millions of attributes. We capitalize on our measure to present a novel approach for Anomaly Mining of Entity Neighborhoods (AMEN). Experiments on real-world attributed graphs illustrate the effectiveness of our measure at anomaly detection, outperforming popular approaches including conductance, density, OddBall, and SODA. In addition to anomaly detection, our qualitative analysis demonstrates the utility of normality as a powerful tool to contrast the correlation between structure and attributes across different graphs.) <|cite_end|>. Ensemble learning, which is widely used in supervised learning settings, can also be applied for outlier detection with non-trivial improvements in performance <|cite_start|> (Reference: Subsampling for efficient and effective unsupervised outlier detection ensembles: Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data.) <|cite_end|> <|cite_start|> (Reference: Robust Contextual Outlier Detection: Where Context Meets Sparsity: Outlier detection is a fundamental data science task with applications ranging from data cleaning to network security. Given the fundamental nature of the task, this has been the subject of much research. Recently, a new class of outlier detection algorithms has emerged, called {\it contextual outlier detection}, and has shown improved performance when studying anomalous behavior in a specific context. However, as we point out in this article, such approaches have limited applicability in situations where the context is sparse (i.e. lacking a suitable frame of reference). Moreover, approaches developed to date do not scale to large datasets. To address these problems, here we propose a novel and robust approach alternative to the state-of-the-art called RObust Contextual Outlier Detection (ROCOD). We utilize a local and global behavioral model based on the relevant contexts, which is then integrated in a natural and robust fashion. We also present several optimizations to improve the scalability of the approach. We run ROCOD on both synthetic and real-world datasets and demonstrate that it outperforms other competitive baselines on the axes of efficacy and efficiency (40X speedup compared to modern contextual outlier detection methods). 
We also drill down and perform a fine-grained analysis to shed light on the rationale for the performance gains of ROCOD and reveal its effectiveness when handling objects with sparse contexts.) <|cite_end|>. <|cite_start|> (Reference: Feature bagging for outlier detection: Outlier detection has recently become an important problem in many industrial and financial applications. In this paper, a novel feature bagging approach for detecting outliers in very large, high dimensional and noisy databases is proposed. It combines results from multiple outlier detection algorithms that are applied using different set of features. Every outlier detection algorithm uses a small subset of features that are randomly selected from the original feature set. As a result, each outlier detector identifies different outliers, and thus assigns to all data records outlier scores that correspond to their probability of being outliers. The outlier scores computed by the individual outlier detection algorithms are then combined in order to find the better quality outliers. Experiments performed on several synthetic and real life data sets show that the proposed methods for combining outputs from multiple outlier detection algorithms provide non-trivial improvements over the base algorithm.) <|cite_end|> combines results from multiple outlier detectors, each of which applies only a subset of features. In contrast, each individual detector can subsample data instances to form an ensemble of detectors <|cite_start|> (Reference: Subsampling for efficient and effective unsupervised outlier detection ensembles: Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data.) <|cite_end|>. Some recent work has started to recognize the importance of explaining detection results. In heterogeneous network anomaly detection, <|cite_start|> (Reference: {On Community Outliers and Their Efficient Detection in Information Networks: Linked or networked data are ubiquitous in many applications. Examples include web data or hypertext documents connected via hyperlinks, social networks or user profiles connected via friend links, co-authorship and citation information, blog data, movie reviews and so on. In these datasets (called "information networks"), closely related objects that share the same properties or interests form a community.
For example, a community in blogsphere could be users mostly interested in cell phone reviews and news. Outlier detection in information networks can reveal important anomalous and interesting behaviors that are not obvious if community information is ignored. An example could be a low-income person being friends with many rich people even though his income is not anomalously low when considered over the entire population. This paper first introduces the concept of community outliers (interesting points or rising stars for a more positive sense), and then shows that well-known baseline approaches without considering links or community information cannot find these community outliers. We propose an efficient solution by modeling networked data as a mixture model composed of multiple normal communities and a set of randomly generated outliers. The probabilistic model characterizes both data and links simultaneously by defining their joint distribution based on hidden Markov random fields (HMRF). Maximizing the data likelihood and the posterior of the model gives the solution to the outlier inference problem. We apply the model on both synthetic data and DBLP data sets, and the results demonstrate importance of this concept, as well as the effectiveness and efficiency of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: {Focused Clustering and Outlier Detection in Large Attributed Graphs: Graph clustering and graph outlier detection have been studied extensively on plain graphs, with various applications. Recently, algorithms have been extended to graphs with attributes as often observed in the real-world. However, all of these techniques fail to incorporate the user preference into graph mining, and thus, lack the ability to steer algorithms to more interesting parts of the attributed graph. In this work, we overcome this limitation and introduce a novel user-oriented approach for mining attributed graphs. The key aspect of our approach is to infer user preference by the so-called focus attributes through a set of user-provided exemplar nodes. In this new problem setting, clusters and outliers are then simultaneously mined according to this user preference. Specifically, our FocusCO algorithm identifies the focus, extracts focused clusters and detects outliers. Moreover, FocusCO scales well with graph size, since we perform a local clustering of interest to the user rather than global partitioning of the entire graph. We show the effectiveness and scalability of our method on synthetic and real-world graphs, as compared to both existing graph clustering and outlier detection approaches.) <|cite_end|> <|cite_start|> (Reference: Accelerated Local Anomaly Detection via Resolving Attributed Networks: Attributed networks, in which network connectivity and node attributes are available, have been increasingly used to model real-world information systems, such as social media and e-commerce platforms. While outlier detection has been extensively studied to identify anomalies that deviate from certain chosen background, existing algorithms cannot be directly applied on attributed networks due to the heterogeneous types of information and the scale of real-world data. Meanwhile, it has been observed that local anomalies, which may align with global condition, are hard to be detected by existing algorithms with interpretability. Motivated by the observations, in this paper, we propose to study the problem of effective and efficient local anomaly detection in attributed networks. 
In particular, we design a collective way for modeling heterogeneous network and attribute information, and develop a novel and efficient distributed optimization algorithm to handle large-scale data. In the experiments, we compare the proposed framework with the state-of-the-art methods on both real and synthetic datasets, and demonstrate its effectiveness and efficiency through quantitative evaluation and case studies.) <|cite_end|> <|cite_start|> (Reference: Robust Contextual Outlier Detection: Where Context Meets Sparsity: Outlier detection is a fundamental data science task with applications ranging from data cleaning to network security. Given the fundamental nature of the task, this has been the subject of much research. Recently, a new class of outlier detection algorithms has emerged, called {\it contextual outlier detection}, and has shown improved performance when studying anomalous behavior in a specific context. However, as we point out in this article, such approaches have limited applicability in situations where the context is sparse (i.e. lacking a suitable frame of reference). Moreover, approaches developed to date do not scale to large datasets. To address these problems, here we propose a novel and robust approach alternative to the state-of-the-art called RObust Contextual Outlier Detection (ROCOD). We utilize a local and global behavioral model based on the relevant contexts, which is then integrated in a natural and robust fashion. We also present several optimizations to improve the scalability of the approach. We run ROCOD on both synthetic and real-world datasets and demonstrate that it outperforms other competitive baselines on the axes of efficacy and efficiency (40X speedup compared to modern contextual outlier detection methods). We also drill down and perform a fine-grained analysis to shed light on the rationale for the performance gains of ROCOD and reveal its effectiveness when handling objects with sparse contexts.) <|cite_end|> utilize attributes of nodes as auxiliary information for explaining the abnormality of resultant anomaly nodes. The motivation of this work is different from them, as we try to infer the reasons that why the given outliers are regarded as outlying, instead of developing new detection methods.
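For concreteness, the short sketch below runs one representative off-the-shelf detector from each of the families discussed above (density-based LOF, model-based one-class SVM, and isolation-based iForest) on the same data. The scores they return live on different scales, which illustrates the comparability gap that a unified, interpretation-based scoring aims to close; the hyperparameter values are illustrative defaults only.
\begin{verbatim}
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(300, 4)),           # inliers
               rng.normal(loc=6.0, size=(10, 4))])  # injected outliers

# Density-based: LOF (larger value below means more outlying).
lof = LocalOutlierFactor(n_neighbors=20)
lof.fit_predict(X)
lof_scores = -lof.negative_outlier_factor_

# Model-based: one-class SVM (larger value below means more outlying).
ocsvm = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(X)
ocsvm_scores = -ocsvm.decision_function(X)

# Isolation-based: iForest (larger value below means more outlying).
iforest = IsolationForest(n_estimators=100, random_state=0).fit(X)
if_scores = -iforest.score_samples(X)

# The three score vectors rank instances similarly, but they are not on a
# common scale and cannot be compared or combined directly.
for name, s in [("LOF", lof_scores), ("OC-SVM", ocsvm_scores),
                ("iForest", if_scores)]:
    print(name, round(float(s.min()), 3), round(float(s.max()), 3))
\end{verbatim}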
Besides algorithm development, researchers are also trying to provide explanations along with the approaches and their outcomes. The approach introduced in <|cite_start|> (Reference: Finding Intensional Knowledge of Distance-Based Outliers: ) <|cite_end|> can also find the subspace in which the features of outliers are exceptional. Ert\"{o}z \textit{et al.} designed a framework for detecting network intrusion with explanations, which only works on categorical attributes <|cite_start|> (Reference: Chapter 3 MINDS-Minnesota Intrusion Detection System: This paper introduces the Minnesota Intrusion Detection System (MINDS), which uses a suite of data mining techniques to automatically detect attacks against computer networks and systems. While the long-term objective of MINDS is to address all aspects of intrusion detection, this paper focuses on two specific contributions: (i) an unsupervised anomaly detection technique that assigns a score to each network connection that reflects how anomalous the connection is, and (ii) an association pattern analysis based module that summarizes those network connections that are ranked highly anomalous by the anomaly detection module. Experimental results on live network traffic at the University of Minnesota show that our anomaly detection techniques are very promising and are successful in automatically detecting several novel intrusions that could not be identified using popular signature-based tools such as SNORT. Furthermore, given the very high volume of connections observed per unit time, association pattern based summarization of novel attacks is quite useful in enabling a security analyst to understand and characterize emerging threats.) <|cite_end|>. The Bayesian program learning framework has been proposed for learning visual concepts that generalize in a way similar to humans, especially with just one or a few data examples <|cite_start|> (Reference: {Human-level concept learning through probabilistic program
induction: Handwritten characters drawn by a model Not only do children learn effortlessly, they do so quickly and with a remarkable ability to use what they have learned as the raw material for creating new stuff. Lake et al. describe a computational model that learns in a similar fashion and does so better than current deep learning algorithms. The model classifies, parses, and recreates handwritten characters, and can generate new letters of the alphabet that look “right” as judged by Turing-like tests of the model's output in comparison to what real humans produce. Science, this issue p. 1332 Combining the capacity to handle noise with probabilistic learning yields humanlike performance in a computational model. People learning new concepts can often generalize successfully from just a single example, yet machine learning algorithms typically require tens or hundreds of examples to perform with similar accuracy. People can also use learned concepts in richer ways than conventional algorithms—for action, imagination, and explanation. We present a computational model that captures these human learning abilities for a large class of simple visual concepts: handwritten characters from the world’s alphabets. The model represents concepts as simple programs that best explain observed examples under a Bayesian criterion. On a challenging one-shot classification task, the model achieves human-level performance while outperforming recent deep learning approaches. We also present several “visual Turing tests” probing the model’s creative generalization abilities, which in many cases are indistinguishable from human behavior.) <|cite_end|>. Interpretations for anomalies detection can be naturally achieved within the scenario of attributed networks <|cite_start|> (Reference: {On Community Outliers and Their Efficient Detection in Information Networks: Linked or networked data are ubiquitous in many applications. Examples include web data or hypertext documents connected via hyperlinks, social networks or user profiles connected via friend links, co-authorship and citation information, blog data, movie reviews and so on. In these datasets (called "information networks"), closely related objects that share the same properties or interests form a community. For example, a community in blogsphere could be users mostly interested in cell phone reviews and news. Outlier detection in information networks can reveal important anomalous and interesting behaviors that are not obvious if community information is ignored. An example could be a low-income person being friends with many rich people even though his income is not anomalously low when considered over the entire population. This paper first introduces the concept of community outliers (interesting points or rising stars for a more positive sense), and then shows that well-known baseline approaches without considering links or community information cannot find these community outliers. We propose an efficient solution by modeling networked data as a mixture model composed of multiple normal communities and a set of randomly generated outliers. The probabilistic model characterizes both data and links simultaneously by defining their joint distribution based on hidden Markov random fields (HMRF). Maximizing the data likelihood and the posterior of the model gives the solution to the outlier inference problem. 
We apply the model on both synthetic data and DBLP data sets, and the results demonstrate importance of this concept, as well as the effectiveness and efficiency of the proposed approach.) <|cite_end|> <|cite_start|> (Reference: Accelerated Local Anomaly Detection via Resolving Attributed Networks: Attributed networks, in which network connectivity and node attributes are available, have been increasingly used to model real-world information systems, such as social media and e-commerce platforms. While outlier detection has been extensively studied to identify anomalies that deviate from certain chosen background, existing algorithms cannot be directly applied on attributed networks due to the heterogeneous types of information and the scale of real-world data. Meanwhile, it has been observed that local anomalies, which may align with global condition, are hard to be detected by existing algorithms with interpretability. Motivated by the observations, in this paper, we propose to study the problem of effective and efficient local anomaly detection in attributed networks. In particular, we design a collective way for modeling heterogeneous network and attribute information, and develop a novel and efficient distributed optimization algorithm to handle large-scale data. In the experiments, we compare the proposed framework with the state-of-the-art methods on both real and synthetic datasets, and demonstrate its effectiveness and efficiency through quantitative evaluation and case studies.) <|cite_end|> <|cite_start|> (Reference: {Focused Clustering and Outlier Detection in Large Attributed Graphs: Graph clustering and graph outlier detection have been studied extensively on plain graphs, with various applications. Recently, algorithms have been extended to graphs with attributes as often observed in the real-world. However, all of these techniques fail to incorporate the user preference into graph mining, and thus, lack the ability to steer algorithms to more interesting parts of the attributed graph. In this work, we overcome this limitation and introduce a novel user-oriented approach for mining attributed graphs. The key aspect of our approach is to infer user preference by the so-called focus attributes through a set of user-provided exemplar nodes. In this new problem setting, clusters and outliers are then simultaneously mined according to this user preference. Specifically, our FocusCO algorithm identifies the focus, extracts focused clusters and detects outliers. Moreover, FocusCO scales well with graph size, since we perform a local clustering of interest to the user rather than global partitioning of the entire graph. We show the effectiveness and scalability of our method on synthetic and real-world graphs, as compared to both existing graph clustering and outlier detection approaches.) <|cite_end|>. These techniques cannot be directly applied to solve our problem, because: (1) Heterogenous information may not be available; (2) In many cases, features are not designed for achieving specific tasks; (3) The definition of anomalies varies in the work above, so a more general interpretation approach is still needed. Moreover, given the black-box characteristics of major mathematical models, the community is exploring ways to interprete the mechanisms that support the model, as well as the rules according to which the predictions are made. 
Ribeiro \textit{et al.} developed a model-agnostic framework that infers explanations by approximating local input-output behavior of the original supervised learning model <|cite_start|> (Reference: "Why Should I Trust You?": Explaining the Predictions of Any Classifier: Despite widespread adoption, machine learning models remain mostly black boxes. Understanding the reasons behind predictions is, however, quite important in assessing trust, which is fundamental if one plans to take action based on a prediction, or when choosing whether to deploy a new model. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. We demonstrate the flexibility of these methods by explaining different models for text (e.g. random forests) and image classification (e.g. neural networks). We show the utility of explanations via novel experiments, both simulated and with human subjects, on various scenarios that require trust: deciding if one should trust a prediction, choosing between models, improving an untrustworthy classifier, and identifying why a classifier should not be trusted.) <|cite_end|>. Lakkaraju \textit{et al.} formalizes decision set learning which can generate short, succinct and non-overlapping rules for classification tasks <|cite_start|> (Reference: Interpretable decision sets: A joint framework for description and prediction: One of the most important obstacles to deploying predictive models is the fact that humans do not understand and trust them. Knowing which variables are important in a model's prediction and how they are combined can be very powerful in helping people understand and trust automatic decision making systems. Here we propose interpretable decision sets, a framework for building predictive models that are highly accurate, yet also highly interpretable. Decision sets are sets of independent if-then rules. Because each rule can be applied independently, decision sets are simple, concise, and easily interpretable. We formalize decision set learning through an objective function that simultaneously optimizes accuracy and interpretability of the rules. In particular, our approach learns short, accurate, and non-overlapping rules that cover the whole feature space and pay attention to small but important classes. Moreover, we prove that our objective is a non-monotone submodular function, which we efficiently optimize to find a near-optimal set of rules. Experiments show that interpretable decision sets are as accurate at classification as state-of-the-art machine learning techniques. They are also three times smaller on average than rule-based models learned by other methods. Finally, results of a user study show that people are able to answer multiple-choice questions about the decision boundaries of interpretable decision sets and write descriptions of classes based on them faster and more accurately than with other rule-based models that were designed for interpretability. 
Overall, our framework provides a new approach to interpretable machine learning that balances accuracy, interpretability, and computational efficiency.) <|cite_end|>. Micenkov\'{a} \textit{et al.} proposed to use classification models and feature selection methods to provide interpretations to the outliers in the subspace <|cite_start|> (Reference: Explaining outliers by subspace separability: Outliers are extraordinary objects in a data collection. Depending on the domain, they may represent errors, fraudulent activities or rare events that are subject of our interest. Existing approaches focus on detection of outliers or degrees of outlierness (ranking), but do not provide a possible explanation of how these objects deviate from the rest of the data. Such explanations would help user to interpret or validate the detected outliers. The problem addressed in this paper is as follows: given an outlier detected by an existing algorithm, we propose a method that determines possible explanations for the outlier. These explanations are expressed in the form of subspaces in which the given outlier shows separability from the inliers. In this manner, our proposed method complements existing outlier detection algorithms by providing additional information about the outliers. Our method is designed to work with any existing outlier detection algorithm and it also includes a heuristic that gives a substantial speedup over the baseline strategy.) <|cite_end|>. Vinh \textit{et al.} utilize the isolation property of outliers and apply isolation forest for outlying aspects discovery <|cite_start|> (Reference: Discovering outlying aspects in large datasets: ) <|cite_end|>. <|paper_end|> | [
"<|reference_start|> Finding Intensional Knowledge of Distance-Based Outliers: <|reference_end|>",
"<|reference_start|> Dolphin: an efficient algorithm for mining distance-based outliers in very large datasets: In this work a novel distance-based outlier detection algorithm, named DOLPHIN, working on disk-resident datasets and whose I/O cost corresponds to the cost of sequentially reading the input dataset file twice, is presented.\n It is both theoretically and empirically shown that the main memory usage of DOLPHIN amounts to a small fraction of the dataset and that DOLPHIN has linear time performance with respect to the dataset size. DOLPHIN gains efficiency by naturally merging together in a unified schema three strategies, namely the selection policy of objects to be maintained in main memory, usage of pruning rules, and similarity search techniques. Importantly, similarity search is accomplished by the algorithm without the need of preliminarily indexing the whole dataset, as other methods do.\n The algorithm is simple to implement and it can be used with any type of data, belonging to either metric or nonmetric spaces. Moreover, a modification to the basic method allows DOLPHIN to deal with the scenario in which the available buffer of main memory is smaller than its standard requirements. DOLPHIN has been compared with state-of-the-art distance-based outlier detection algorithms, showing that it is much more efficient. <|reference_end|>",
"<|reference_start|> Estimating the Support of a High-dimensional Distribution: Suppose you are given some data set drawn from an underlying probability distribution P and you want to estimate a simple subset S of input space such that the probability that a test point drawn from P lies outside of S equals some a priori specified value between 0 and 1. We propose a method to approach this problem by trying to estimate a function f that is positive on S and negative on the complement. The functional form of f is given by a kernel expansion in terms of a potentially small subset of the training data; it is regularized by controlling the length of the weight vector in an associated feature space. The expansion coefficients are found by solving a quadratic programming problem, which we do by carrying out sequential optimization over pairs of input patterns. We also provide a theoretical analysis of the statistical performance of our algorithm. The algorithm is a natural extension of the support vector algorithm to the case of unlabeled data. <|reference_end|>",
"<|reference_start|> Subsampling for efficient and effective unsupervised outlier detection ensembles: Outlier detection and ensemble learning are well established research directions in data mining yet the application of ensemble techniques to outlier detection has been rarely studied. Here, we propose and study subsampling as a technique to induce diversity among individual outlier detectors. We show analytically and experimentally that an outlier detector based on a subsample per se, besides inducing diversity, can, under certain conditions, already improve upon the results of the same outlier detector on the complete dataset. Building an ensemble on top of several subsamples is further improving the results. While in the literature so far the intuition that ensembles improve over single outlier detectors has just been transferred from the classification literature, here we also justify analytically why ensembles are also expected to work in the unsupervised area of outlier detection. As a side effect, running an ensemble of several outlier detectors on subsamples of the dataset is more efficient than ensembles based on other means of introducing diversity and, depending on the sample rate and the size of the ensemble, can be even more efficient than just the single outlier detector on the complete data. <|reference_end|>"
] | [
16,
25,
27,
40
] | {"<|multi_cite_1_1|>": "ss-888850", "<|multi_cite_1_2|>": "ss-944654", "<|multi_cite_3_1|>": "ss-942285", "<|multi_cite_3_2|>": "arxiv-107196", "<|multi_cite_3_3|>": "ss-796008", "<|cite_4|>": "ss-1253423", "<|multi_cite_5_1|>": "ss-1463345", "<|multi_cite_5_2|>": "ss-853136", "<|multi_cite_6_2|>": "ss-774373", "<|cite_7|>": "arxiv-16381", "<|cite_8|>": "ss-997276", "<|multi_cite_9_1|>": "ss-1364041", "<|multi_cite_9_2|>": "ss-1364042", "<|multi_cite_9_3|>": "ss-2273739", "<|multi_cite_9_4|>": "ss-2273740", "<|cite_10|>": "ss-2273740", "<|cite_11|>": "ss-1364041", "<|multi_cite_12_1|>": "ss-682858", "<|multi_cite_12_2|>": "ss-682861", "<|multi_cite_12_3|>": "ss-1263443", "<|multi_cite_12_4|>": "ss-1345675", "<|multi_cite_12_5|>": "ss-774380", "<|multi_cite_13_1|>": "ss-808008", "<|multi_cite_13_2|>": "ss-1096526", "<|multi_cite_13_3|>": "ss-981253", "<|multi_cite_13_4|>": "ss-1816560", "<|multi_cite_13_5|>": "ss-854010", "<|multi_cite_14_1|>": "ss-703762", "<|multi_cite_14_2|>": "ss-682859", "<|multi_cite_14_3|>": "ss-886672", "<|multi_cite_15_1|>": "ss-682861", "<|multi_cite_15_2|>": "ss-1475824", "<|multi_cite_15_3|>": "ss-1008515", "<|multi_cite_16_1|>": "ss-1096526", "<|multi_cite_16_2|>": "ss-1816560", "<|multi_cite_17_1|>": "ss-774380", "<|multi_cite_17_2|>": "arxiv-90989", "<|multi_cite_18_1|>": "ss-1280359", "<|multi_cite_18_2|>": "arxiv-102962", "<|cite_19|>": "ss-1280357", "<|cite_20|>": "ss-1280359", "<|multi_cite_21_1|>": "ss-774380", "<|multi_cite_21_2|>": "ss-1230907", "<|multi_cite_21_3|>": "ss-1837723", "<|multi_cite_21_4|>": "arxiv-102962", "<|cite_22|>": "ss-1364041", "<|cite_23|>": "ss-1444771", "<|cite_24|>": "ss-1098495", "<|multi_cite_25_1|>": "ss-774380", "<|multi_cite_25_2|>": "ss-1837723", "<|multi_cite_25_3|>": "ss-1230907", "<|cite_26|>": "arxiv-92295", "<|cite_27|>": "ss-1518537", "<|cite_28|>": "ss-1364042", "<|cite_29|>": "ss-2273740"} |
1911.13294 | <|paper_start|> Title: Classification of distributed binary labeling problems
Abstract: Classification of distributed binary labeling problems: We present a complete classification of the deterministic distributed time complexity for a family of graph problems: binary labeling problems in trees. These are locally checkable problems that can be encoded with an alphabet of size two in the edge labeling formalism. Examples of binary labeling problems include sinkless orientation, sinkless and sourceless orientation, 2-vertex coloring, perfect matching, and the task of coloring edges red and blue such that all nodes are incident to at least one red and at least one blue edge. More generally, we can encode e.g. any cardinality constraints on indegrees and outdegrees. We study the deterministic time complexity of solving a given binary labeling problem in trees, in the usual LOCAL model of distributed computing. We show that the complexity of any such problem is in one of the following classes: $O(1)$, $\Theta(\log n)$, $\Theta(n)$, or unsolvable. In particular, a problem that can be represented in the binary labeling formalism cannot have time complexity $\Theta(\log^* n)$, and hence we know that e.g. any encoding of maximal matchings has to use at least three labels (which is tight). Furthermore, given the description of any binary labeling problem, we can easily determine in which of the four classes it is and what is an asymptotically optimal algorithm for solving it. Hence the distributed time complexity of binary labeling problems is decidable, not only in principle, but also in practice: there is a simple and efficient algorithm that takes the description of a binary labeling problem and outputs its distributed time complexity.
Introduction
This work presents a complete classification of the deterministic distributed time complexity for a family of distributed graph problems: \emph{binary labeling problems} in trees. These are a special case of widely-studied locally checkable labeling problems <|cite_start|> (Reference: What Can Be Computed Locally?: The purpose of this paper is a study of computation that can be done locally in a distributed network, where "locally" means within time (or distance) independent of the size of the network. Locally checkable labeling (LCL) problems are considered, where the legality of a labeling can be checked locally (e.g., coloring). The results include the following:
There are nontrivial LCL problems that have local algorithms.
There is a variant of the dining philosophers problem that can be solved locally.
Randomization cannot make an LCL problem local; i.e., if a problem has a local randomized algorithm then it has a local deterministic algorithm.
It is undecidable, in general, whether a given LCL has a local algorithm.
However, it is decidable whether a given LCL has an algorithm that operates in a given time $t$.
Any LCL problem that has a local algorithm has one that is order-invariant (the algorithm depends only on the order of the processor IDs).) <|cite_end|>. The defining property of a binary labeling problem is that it can be encoded with an \emph{alphabet of size two} in the \emph{edge labeling formalism}, which is a modern representation for locally checkable graph problems <|cite_start|> (Reference: Lower bounds for maximal matchings and maximal independent sets: There are distributed graph algorithms for finding maximal matchings and maximal independent sets in $O(\Delta + \log^* n)$ communication rounds; here $n$ is the number of nodes and $\Delta$ is the maximum degree. The lower bound by Linial (1987, 1992) shows that the dependency on $n$ is optimal: these problems cannot be solved in $o(\log^* n)$ rounds even if $\Delta = 2$. However, the dependency on $\Delta$ is a long-standing open question, and there is currently an exponential gap between the upper and lower bounds. We prove that the upper bounds are tight. We show that any algorithm that finds a maximal matching or maximal independent set with probability at least $1-1/n$ requires $\Omega(\min\{\Delta,\log \log n / \log \log \log n\})$ rounds in the LOCAL model of distributed computing. As a corollary, it follows that any deterministic algorithm that finds a maximal matching or maximal independent set requires $\Omega(\min\{\Delta, \log n / \log \log n\})$ rounds; this is an improvement over prior lower bounds also as a function of $n$.) <|cite_end|> <|cite_start|> (Reference: An Automatic Speedup Theorem for Distributed Problems: Recently, Brandt et al. [STOC'16] proved a lower bound for the distributed Lov\'asz Local Lemma, which has been conjectured to be tight for sufficiently relaxed LLL criteria by Chang and Pettie [FOCS'17]. At the heart of their result lies a speedup technique that, for graphs of girth at least $2t+2$, transforms any $t$-round algorithm for one specific LLL problem into a $(t-1)$-round algorithm for the same problem. We substantially improve on this technique by showing that such a speedup exists for any locally checkable problem $\Pi$, with the difference that the problem $\Pi_1$ the inferred $(t-1)$-round algorithm solves is not (necessarily) the same problem as $\Pi$. Our speedup is automatic in the sense that there is a fixed procedure that transforms a description for $\Pi$ into a description for $\Pi_1$ and reversible in the sense that any $(t-1)$-round algorithm for $\Pi_1$ can be transformed into a $t$-round algorithm for $\Pi$. In particular, for any locally checkable problem $\Pi$ with exact deterministic time complexity $T(n, \Delta) \leq t$ on graphs with $n$ nodes, maximum node degree $\Delta$, and girth at least $2t+2$, there is a sequence of problems $\Pi_1, \Pi_2, \dots$ with time complexities $T(n, \Delta)-1, T(n, \Delta)-2, \dots$, that can be inferred from $\Pi$. As a first application of our generalized speedup, we solve a long-standing open problem of Naor and Stockmeyer [STOC'93]: we show that weak $2$-coloring in odd-degree graphs cannot be solved in $o(\log^* \Delta)$ rounds, thereby providing a matching lower bound to their upper bound.) <|cite_end|> <|cite_start|> (Reference: Brief Announcement: Round eliminator: a tool for automatic speedup simulation: In the last years, the round elimination technique has been successfully used to prove many lower bounds for the LOCAL model of distributed computing. 
In 2019, Brandt proved that this technique can be theoretically automated: given a locally checkable problem Π that can be solved in T rounds of communication, it is possible to mechanically define a problem Π′ that requires T − 1 rounds, and by repeating this procedure many times one can obtain interesting lower and upper bounds. In this work, we show that this technique can be automated also in practice: round eliminator is a computer program where we can feed our favorite locally checkable problem and obtain lower (and sometimes upper) bounds for it, automatically.) <|cite_end|>; we will give the precise definition in Section~\ref{ssec:binary-labeling-problems}.
\paragraph{Contributions.}
In this work, we focus on \emph{deterministic} distributed algorithms in the LOCAL model of distributed computing, and we study the computational complexity of solving a binary labeling problem in \emph{trees}. It is easy to see that there are binary labeling problems that fall in each of the following classes:
\begin{itemize}[noitemsep]
\item Trivial problems, solvable in $O(1)$ rounds.
\item Problems similar to sinkless orientation, solvable in $\Theta(\log n)$ rounds <|cite_start|> (Reference: A Lower Bound for the Distributed Lov\'asz Local Lemma: We show that any randomised Monte Carlo distributed algorithm for the Lov\'asz local lemma requires $\Omega(\log \log n)$ communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of $d = O(1)$, where $d$ is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lov\'asz local lemma with a running time of $O(\log n)$ rounds in bounded-degree graphs, and the best lower bound before our work was $\Omega(\log^* n)$ rounds [Chung et al. 2014].) <|cite_end|> <|cite_start|> (Reference: An Exponential Separation Between Randomized and Deterministic Complexity in the LOCAL Model: Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast $\Delta$-coloring of trees requires random bits: Building on the recent lower bounds of Brandt et al., we prove that the randomized complexity of $\Delta$-coloring a tree with maximum degree $\Delta\ge 55$ is $\Theta(\log_\Delta\log n)$, whereas its deterministic complexity is $\Theta(\log_\Delta n)$ for any $\Delta\ge 3$. This also establishes a large separation between the deterministic complexity of $\Delta$-coloring and $(\Delta+1)$-coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in $O(1)+o(\log_\Delta n)$ rounds can be transformed to run in $O(\log^*n-\log^*\Delta+1)$ rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires $\Omega(\log_\Delta n)$ time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size $n$ is at least its deterministic complexity on instances of size $\sqrt{\log n}$. This shows that a deterministic $\Omega(\log_\Delta n)$ lower bound for any problem implies a randomized $\Omega(\log_\Delta\log n)$ lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model.) <|cite_end|> <|cite_start|> (Reference: Distributed Degree Splitting, Edge Coloring, and Orientations: We study a family of closely-related distributed graph problems, which we call degree splitting, where roughly speaking the objective is to partition (or orient) the edges such that each node's degree is split almost uniformly. Our findings lead to answers for a number of problems, a sampling of which includes: -- We present a $poly(\log n)$ round deterministic algorithm for $(2\Delta-1)\cdot (1+o(1))$-edge-coloring, where $\Delta$ denotes the maximum degree. Modulo the $1+o(1)$ factor, this settles one of the long-standing open problems of the area from the 1990's (see e.g. Panconesi and Srinivasan [PODC'92]). 
Indeed, a weaker requirement of $(2\Delta-1)\cdot poly(\log \Delta)$-edge-coloring in $poly(\log n)$ rounds was asked for in the 4th open question in the Distributed Graph Coloring book by Barenboim and Elkin. -- We show that sinkless orientation---i.e., orienting edges such that each node has at least one outgoing edge---on $\Delta$-regular graphs can be solved in $O(\log_{\Delta} \log n)$ rounds randomized and in $O(\log_{\Delta} n)$ rounds deterministically. These prove the corresponding lower bounds by Brandt et al. [STOC'16] and Chang, Kopelowitz, and Pettie [FOCS'16] to be tight. Moreover, these show that sinkless orientation exhibits an exponential separation between its randomized and deterministic complexities, akin to the results of Chang et al. for $\Delta$-coloring $\Delta$-regular trees. -- We present a randomized $O(\log^4 n)$ round algorithm for orienting $a$-arboricity graphs with maximum out-degree $a(1+\epsilon)$. This can be also turned into a decomposition into $a (1+\epsilon)$ forests when $a=\Omega(\log n)$ and into $a (1+\epsilon)$ pseduo-forests when $a=o(\log n)$. Obtaining an efficient distributed decomposition into less than $2a$ forests was stated as the 10th open problem in the book by Barenboim and Elkin.) <|cite_end|>.
\item Global problems, requiring $\Theta(n)$ rounds.
\item Unsolvable problems.
\end{itemize}
We show that this is a \emph{complete} list of all possible complexities. In particular, there are no binary labeling problems of complexities such as $\Theta(\log^* n)$ or $\Theta(\sqrt{n})$. For example, maximal matching is a problem very similar in spirit to binary labeling problems, it has a complexity $\Theta(\log^* n)$ in bounded-degree graphs <|cite_start|> (Reference: Locality in Distributed Graph Algorithms: This paper concerns a number of algorithmic problems on graphs and how they may be solved in a distributed fashion. The computational model is such that each node of the graph is occupied by a processor which has its own ID. Processors are restricted to collecting data from others which are at a distance at most t away from them in t time units, but are otherwise computationally unbounded. This model focuses on the issue of locality in distributed processing, namely, to what extent a global solution to a computational problem can be obtained from locally available data.Three results are proved within this model: • A 3-coloring of an n-cycle requires time $\Omega (\log ^ * n)$. This bound is tight, by previous work of Cole and Vishkin. • Any algorithm for coloring the d-regular tree of radius r which runs for time at most $2r/3$ requires at least $\Omega (\sqrt d )$ colors. • In an n-vertex graph of largest degree $\Delta $, an $O(\Delta ^2 )$-coloring may be found in time $O(\log ^ * n)$.) <|cite_end|> <|cite_start|> (Reference: Deterministic Coin Tossing with Applications to Optimal Parallel List Ranking: ) <|cite_end|>, and it can be encoded in the edge labeling formalism using an alphabet of size three <|cite_start|> (Reference: Lower bounds for maximal matchings and maximal independent sets: There are distributed graph algorithms for finding maximal matchings and maximal independent sets in $O(\Delta + \log^* n)$ communication rounds; here $n$ is the number of nodes and $\Delta$ is the maximum degree. The lower bound by Linial (1987, 1992) shows that the dependency on $n$ is optimal: these problems cannot be solved in $o(\log^* n)$ rounds even if $\Delta = 2$. However, the dependency on $\Delta$ is a long-standing open question, and there is currently an exponential gap between the upper and lower bounds. We prove that the upper bounds are tight. We show that any algorithm that finds a maximal matching or maximal independent set with probability at least $1-1/n$ requires $\Omega(\min\{\Delta,\log \log n / \log \log \log n\})$ rounds in the LOCAL model of distributed computing. As a corollary, it follows that any deterministic algorithm that finds a maximal matching or maximal independent set requires $\Omega(\min\{\Delta, \log n / \log \log n\})$ rounds; this is an improvement over prior lower bounds also as a function of $n$.) <|cite_end|>---our work shows that three labels are also necessary for all problems in this complexity class.
Moreover, using our results one can easily determine the complexity class of any given binary labeling problem. We give a simple, concise characterization of all binary labeling problems for classes $O(1)$, $\Theta(n)$, and unsolvable, and we show that all other problems belong to class $\Theta(\log n)$. Hence the deterministic distributed time complexity of a binary labeling problem is \emph{decidable}, not only in theory but also \emph{in practice}: given the description of any binary labeling problem, a human being or a computer can easily find out the distributed computational complexity of the problem, as well as an asymptotically optimal algorithm for solving the problem. Our classification of all binary labeling problems is presented in Table~\ref{tab:deterministic}, and given any binary labeling problem $\Pi$, one can simply do mechanical pattern matching to find its complexity class in this table.
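To convey the flavor of this mechanical check, the following sketch (ours, purely illustrative) encodes a binary labeling problem as a tuple $\Pi = (d, \delta, W, B)$, where $W \subseteq \{0, \dots, d\}$ and $B \subseteq \{0, \dots, \delta\}$ list the admissible numbers of incident $1$-labeled edges for the two node types; the precise formalism is given in Section~\ref{ssec:binary-labeling-problems}. The function screens only for the obvious degenerate cases and otherwise defers to the full case analysis of Table~\ref{tab:deterministic}.
\begin{verbatim}
# Illustrative sketch only: Pi = (d, delta, W, B), with W and B the allowed
# numbers of incident 1-labeled edges for the two node types.
def screen(d, delta, W, B):
    W, B = set(W), set(B)
    # No admissible count for one node type (assuming the instance contains
    # nodes of both types): no valid labeling can exist.
    if not W or not B:
        return "unsolvable"
    # A constant labeling (all edges 0, or all edges 1) is always valid,
    # so the problem is solvable in O(1) rounds without communication.
    if (0 in W and 0 in B) or (d in W and delta in B):
        return "O(1): a constant labeling suffices"
    # Everything else requires the full classification of Table 1.
    return "defer to the full classification"

print(screen(3, 2, W={0, 1, 2, 3}, B={0}))   # -> O(1): a constant labeling suffices
print(screen(3, 2, W={1, 2}, B={1}))         # -> defer to the full classification
\end{verbatim}
The point of Table~\ref{tab:deterministic} is precisely that the remaining cases can be resolved just as mechanically, by pattern matching on $d$, $\delta$, $W$, and $B$.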
Our work also sheds new light on the \emph{automatic round elimination technique} <|cite_start|> (Reference: An Automatic Speedup Theorem for Distributed Problems: Recently, Brandt et al. [STOC'16] proved a lower bound for the distributed Lov\'asz Local Lemma, which has been conjectured to be tight for sufficiently relaxed LLL criteria by Chang and Pettie [FOCS'17]. At the heart of their result lies a speedup technique that, for graphs of girth at least $2t+2$, transforms any $t$-round algorithm for one specific LLL problem into a $(t-1)$-round algorithm for the same problem. We substantially improve on this technique by showing that such a speedup exists for any locally checkable problem $\Pi$, with the difference that the problem $\Pi_1$ the inferred $(t-1)$-round algorithm solves is not (necessarily) the same problem as $\Pi$. Our speedup is automatic in the sense that there is a fixed procedure that transforms a description for $\Pi$ into a description for $\Pi_1$ and reversible in the sense that any $(t-1)$-round algorithm for $\Pi_1$ can be transformed into a $t$-round algorithm for $\Pi$. In particular, for any locally checkable problem $\Pi$ with exact deterministic time complexity $T(n, \Delta) \leq t$ on graphs with $n$ nodes, maximum node degree $\Delta$, and girth at least $2t+2$, there is a sequence of problems $\Pi_1, \Pi_2, \dots$ with time complexities $T(n, \Delta)-1, T(n, \Delta)-2, \dots$, that can be inferred from $\Pi$. As a first application of our generalized speedup, we solve a long-standing open problem of Naor and Stockmeyer [STOC'93]: we show that weak $2$-coloring in odd-degree graphs cannot be solved in $o(\log^* \Delta)$ rounds, thereby providing a matching lower bound to their upper bound.) <|cite_end|> <|cite_start|> (Reference: Brief Announcement: Round eliminator: a tool for automatic speedup simulation: In the last years, the round elimination technique has been successfully used to prove many lower bounds for the LOCAL model of distributed computing. In 2019, Brandt proved that this technique can be theoretically automated: given a locally checkable problem Π that can be solved in T rounds of communication, it is possible to mechanically define a problem Π′ that requires T − 1 rounds, and by repeating this procedure many times one can obtain interesting lower and upper bounds. In this work, we show that this technique can be automated also in practice: round eliminator is a computer program where we can feed our favorite locally checkable problem and obtain lower (and sometimes upper) bounds for it, automatically.) <|cite_end|>. Previously, it was known that sinkless orientation is a nontrivial \emph{fixed point} for round elimination <|cite_start|> (Reference: A Lower Bound for the Distributed Lov\'asz Local Lemma: We show that any randomised Monte Carlo distributed algorithm for the Lov\'asz local lemma requires $\Omega(\log \log n)$ communication rounds, assuming that it finds a correct assignment with high probability. Our result holds even in the special case of $d = O(1)$, where $d$ is the maximum degree of the dependency graph. By prior work, there are distributed algorithms for the Lov\'asz local lemma with a running time of $O(\log n)$ rounds in bounded-degree graphs, and the best lower bound before our work was $\Omega(\log^* n)$ rounds [Chung et al. 2014].) <|cite_end|>---such fixed points are very helpful for lower bound proofs, but little was known about the existence of other nontrivial fixed points. 
Our classification of binary labeling problems in this work led to the discovery of new nontrivial fixed points---this will hopefully pave the way for the development of a theoretical framework that enables us to understand when round elimination leads to fixed points and why.
\paragraph{Open questions.}
The main open question that we leave for future work is extending the characterization to randomized distributed algorithms: some binary labeling problems can be solved in $\Theta(\log \log n)$ rounds with randomized algorithms, but it is not yet known exactly which binary labeling problems belong to this class. Our work takes the first steps towards developing such a classification.
\paragraph{Structure.}
We start with a brief discussion of the general landscape of distributed computational complexity in Section~\ref{sec:background}, and then give formal definitions of binary labeling problems in Section~\ref{ssec:binary-labeling-problems}. We present a summary of our results in Section~\ref{sec:contributions}---our main contribution is the characterization of all binary labeling problems in Table~\ref{tab:deterministic}. We explain the details of the model of computing in Section~\ref{sec:model}. All algorithms and lower bound proofs related to deterministic complexity are presented in Sections \ref{sec:unsol}--\ref{sec:log-lower}:
\begin{itemize}[noitemsep]
\item Section~\ref{sec:unsol}: unsolvable problems,
\item Section~\ref{sec:constant}: complexity class $O(1)$,
\item Section~\ref{sec:global}: complexity class $\Theta(n)$,
\item Section~\ref{sec:log-upper}: $O(\log n)$ upper bounds,
\item Section~\ref{sec:log-lower}: $\Omega(\log n)$ lower bounds.
\end{itemize}
We conclude with a discussion of randomized complexity in Section~\ref{sec:rand}. <|paper_end|> | [
"<|reference_start|> An Exponential Separation Between Randomized and Deterministic Complexity in the LOCAL Model: Over the past 30 years numerous algorithms have been designed for symmetry breaking problems in the LOCAL model, such as maximal matching, MIS, vertex coloring, and edge-coloring. For most problems the best randomized algorithm is at least exponentially faster than the best deterministic algorithm. In this paper we prove that these exponential gaps are necessary and establish connections between the deterministic and randomized complexities in the LOCAL model. Each result has a very compelling take-away message: 1. Fast $\\Delta$-coloring of trees requires random bits: Building on the recent lower bounds of Brandt et al., we prove that the randomized complexity of $\\Delta$-coloring a tree with maximum degree $\\Delta\\ge 55$ is $\\Theta(\\log_\\Delta\\log n)$, whereas its deterministic complexity is $\\Theta(\\log_\\Delta n)$ for any $\\Delta\\ge 3$. This also establishes a large separation between the deterministic complexity of $\\Delta$-coloring and $(\\Delta+1)$-coloring trees. 2. Randomized lower bounds imply deterministic lower bounds: We prove that any deterministic algorithm for a natural class of problems that runs in $O(1)+o(\\log_\\Delta n)$ rounds can be transformed to run in $O(\\log^*n-\\log^*\\Delta+1)$ rounds. If the transformed algorithm violates a lower bound (even allowing randomization), then one can conclude that the problem requires $\\Omega(\\log_\\Delta n)$ time deterministically. 3. Deterministic lower bounds imply randomized lower bounds: We prove that the randomized complexity of any natural problem on instances of size $n$ is at least its deterministic complexity on instances of size $\\sqrt{\\log n}$. This shows that a deterministic $\\Omega(\\log_\\Delta n)$ lower bound for any problem implies a randomized $\\Omega(\\log_\\Delta\\log n)$ lower bound. It also illustrates that the graph shattering technique is absolutely essential to the LOCAL model. <|reference_end|>",
"<|reference_start|> Deterministic Coin Tossing with Applications to Optimal Parallel List Ranking: <|reference_end|>",
"<|reference_start|> Lower bounds for maximal matchings and maximal independent sets: There are distributed graph algorithms for finding maximal matchings and maximal independent sets in $O(\\Delta + \\log^* n)$ communication rounds; here $n$ is the number of nodes and $\\Delta$ is the maximum degree. The lower bound by Linial (1987, 1992) shows that the dependency on $n$ is optimal: these problems cannot be solved in $o(\\log^* n)$ rounds even if $\\Delta = 2$. However, the dependency on $\\Delta$ is a long-standing open question, and there is currently an exponential gap between the upper and lower bounds. We prove that the upper bounds are tight. We show that any algorithm that finds a maximal matching or maximal independent set with probability at least $1-1/n$ requires $\\Omega(\\min\\{\\Delta,\\log \\log n / \\log \\log \\log n\\})$ rounds in the LOCAL model of distributed computing. As a corollary, it follows that any deterministic algorithm that finds a maximal matching or maximal independent set requires $\\Omega(\\min\\{\\Delta, \\log n / \\log \\log n\\})$ rounds; this is an improvement over prior lower bounds also as a function of $n$. <|reference_end|>",
"<|reference_start|> Brief Announcement: Round eliminator: a tool for automatic speedup simulation: In the last years, the round elimination technique has been successfully used to prove many lower bounds for the LOCAL model of distributed computing. In 2019, Brandt proved that this technique can be theoretically automated: given a locally checkable problem Π that can be solved in T rounds of communication, it is possible to mechanically define a problem Π′ that requires T − 1 rounds, and by repeating this procedure many times one can obtain interesting lower and upper bounds. In this work, we show that this technique can be automated also in practice: round eliminator is a computer program where we can feed our favorite locally checkable problem and obtain lower (and sometimes upper) bounds for it, automatically. <|reference_end|>"
] | [
5,
8,
9,
11
] | {"<|cite_1|>": "ss-785022", "<|multi_cite_2_1|>": "arxiv-186940", "<|multi_cite_2_2|>": "arxiv-193005", "<|multi_cite_2_3|>": "ss-1396727", "<|multi_cite_7_1|>": "arxiv-86484", "<|multi_cite_7_2|>": "arxiv-92953", "<|multi_cite_7_3|>": "arxiv-103724", "<|multi_cite_3_1|>": "ss-1516866", "<|multi_cite_3_2|>": "ss-798526", "<|cite_4|>": "arxiv-186940", "<|multi_cite_5_1|>": "arxiv-193005", "<|multi_cite_5_2|>": "ss-1396727", "<|cite_6|>": "arxiv-86484"} |
2406.07126 | <|paper_start|> Title: Logical Distillation of Graph Neural Networks
Abstract: Logical Distillation of Graph Neural Networks: We present a logic-based interpretable model for learning on graphs and an algorithm to distill this model from a Graph Neural Network (GNN). Recent results have shown connections between the expressivity of GNNs and the two-variable fragment of first-order logic with counting quantifiers (C2). We introduce a decision-tree-based model that leverages an extension of C2 to distill interpretable logical classifiers from GNNs. We test our approach on multiple GNN architectures. The distilled models are interpretable, succinct, and attain accuracy similar to that of the underlying GNN. Furthermore, when the ground truth is expressible in C2, our approach outperforms the GNN.
Introduction
We present and evaluate an algorithm for distilling \emph{Graph Neural Networks} (GNNs) into a symbolic model. Our distillation algorithm relies on a novel model called \emph{Iterated Decision Tree} (IDT), which is tailored to represent logical formulas represented by GNNs.
GNNs play a crucial role in safety-critical applications like drug discovery and in cost-critical applications like large-scale transport routing. However, most GNN models are black boxes, and their internal representations are opaque to human or computer-aided formal scrutiny. Hence, interpreting and explaining GNN predictions is a fundamental problem of significant research interest. Although many results have characterized the expressivity of GNNs in terms of formal languages like first-order logic, extracting the logical classifiers expressed by GNNs remains largely unexplored. We aim to fill this gap by developing a distillation model that extracts the logical classifiers expressed by GNNs.
The key theoretical insight behind our model is that all the first-order logic classifiers expressed by GNNs are expressible in first-order logic with only two variables and counting quantifiers ($\C2$). Hence our model, the IDT, is designed to express any $\C2$ formula. An IDT consists of a sequence of decision trees. Each decision tree expresses a number of unary $\C2$ formulas of quantifier depth one. Combining multiple such decision trees enables us to express formulas of larger quantifier depth.
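As a rough illustration of this structure (a simplified sketch, not the formal construction of Section~\ref{Sec:IDT}), one may picture inference with an IDT as applying one decision tree per round: in every round, each node is described by its own current label together with the counts of labels among its neighbors, which is exactly the kind of information a quantifier-depth-one $\C2$ formula can refer to, and stacking $L$ such rounds yields conditions of quantifier depth $L$. The snippet below assumes scikit-learn decision trees that have already been fitted on such count features.
\begin{verbatim}
import numpy as np

class IteratedDecisionTree:
    """Simplified sketch: a sequence of fitted decision trees applied
    round by round over a graph given as an adjacency list."""

    def __init__(self, trees):
        self.trees = trees  # one fitted sklearn DecisionTreeClassifier per round

    @staticmethod
    def _round_features(adj, labels, n_classes):
        # A node's features: one-hot of its own label plus, for every class c,
        # the number of neighbors currently labeled c -- the counting
        # information available to a depth-one C2 formula.
        n = len(adj)
        feats = np.zeros((n, 2 * n_classes))
        for v in range(n):
            feats[v, labels[v]] = 1.0
            for u in adj[v]:
                feats[v, n_classes + labels[u]] += 1.0
        return feats

    def node_labels(self, adj, init_labels, n_classes):
        labels = np.asarray(init_labels, dtype=int)
        for tree in self.trees:  # one tree per round
            feats = self._round_features(adj, labels, n_classes)
            labels = tree.predict(feats).astype(int)
        return labels
\end{verbatim}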
Additionally, we propose an extension of $\C2$ that can capture operations like mean aggregation, which are common in GNNs, and incorporate it into IDTs.
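To see why such an extension is needed, note that thresholding a mean-aggregated 0/1 feature is a \emph{proportional} rather than an absolute counting condition. For a node $v$ with $\deg(v) > 0$ and a unary formula $\varphi$, we have (a simple identity, stated here only for illustration)
\[
\frac{1}{\deg(v)} \sum_{u \in N(v)} \mathbb{1}[\varphi(u)] \;\ge\; r
\quad\Longleftrightarrow\quad
\bigl|\{\, u \in N(v) : \varphi(u) \,\}\bigr| \;\ge\; r \cdot \deg(v),
\]
so the required count grows with the degree of $v$, whereas a plain $\C2$ quantifier $\exists^{\ge k} y$ fixes the threshold $k$ in advance; this degree-dependent behavior is exactly what mean aggregation introduces.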
Our distillation algorithm is able to exploit intermediate node representations from each message-passing layer of a GNN to iteratively learn decision trees of an IDT. Although the learning process for IDTs is guided by the GNN, our empirical results show that the logic-based inductive bias incentivizes succinct and interpretable models.
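A minimal sketch of this idea is shown below; it assumes that the per-layer node embeddings of the trained GNN are available as a list of NumPy matrices (one per message-passing layer), discretizes them with $k$-means (a simplifying choice on our part), and fits one shallow scikit-learn decision tree per layer on count features of the previous layer's pseudo-labels. The actual procedure of Section~\ref{Sec:Training} differs in how the intermediate representations are turned into targets and in how the final graph-level prediction is produced.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

def count_features(adj, labels, n_concepts):
    # Own pseudo-label (one-hot) plus per-concept neighbor counts, i.e. the
    # information a quantifier-depth-one C2-style split can inspect.
    n = len(adj)
    X = np.zeros((n, 2 * n_concepts))
    for v in range(n):
        X[v, labels[v]] = 1.0
        for u in adj[v]:
            X[v, n_concepts + labels[u]] += 1.0
    return X

def distill(adj, layer_embeddings, n_concepts=8, max_depth=3):
    """Sketch of layer-wise distillation guided by a trained GNN."""
    trees = []
    prev = np.zeros(len(adj), dtype=int)          # a single initial concept
    for H in layer_embeddings:                    # H: (num_nodes, hidden_dim)
        target = KMeans(n_clusters=n_concepts, n_init=10).fit_predict(H)
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(count_features(adj, prev, n_concepts), target)
        trees.append(tree)
        prev = target
    return trees
\end{verbatim}
The returned list of trees plays the role of the IDT sketched above: applying them round by round approximates the layer-wise behavior of the GNN in a form whose splits can be read off as counting conditions.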
We test IDTs on multiple synthetic and real-world datasets, performing distillation on two prominent GNN architectures, Graph Isomorphism Networks (GIN) <|cite_start|> (Reference: How Powerful are Graph Neural Networks?: Graph Neural Networks (GNNs) are an effective framework for representation learning of graphs. GNNs follow a neighborhood aggregation scheme, where the representation vector of a node is computed by recursively aggregating and transforming representation vectors of its neighboring nodes. Many GNN variants have been proposed and have achieved state-of-the-art results on both node and graph classification tasks. However, despite GNNs revolutionizing graph representation learning, there is limited understanding of their representational properties and limitations. Here, we present a theoretical framework for analyzing the expressive power of GNNs to capture different graph structures. Our results characterize the discriminative power of popular GNN variants, such as Graph Convolutional Networks and GraphSAGE, and show that they cannot learn to distinguish certain simple graph structures. We then develop a simple architecture that is provably the most expressive among the class of GNNs and is as powerful as the Weisfeiler-Lehman graph isomorphism test. We empirically validate our theoretical findings on a number of graph classification benchmarks, and demonstrate that our model achieves state-of-the-art performance.) <|cite_end|> and Graph Convolution Networks (GCN) <|cite_start|> (Reference: Semi-Supervised Classification with Graph Convolutional Networks: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.) <|cite_end|>. Our algorithm consistently distills IDTs that are succinct and have comparable predictive performance to the underlying GNN. Furthermore, when the ground truth is a $\C2$ formula, the distilled IDT exhibits better generalization, outperforming the GNN on the test data. Qualitatively, we find that our method can provide new insights.
For instance, on the AIDS dataset <|cite_start|> (Reference: {{IAM: Recently, various forms of cloud-based services have become available. Such cloud-based services are used in many settings, including public institutions and large portal sites, but the lack of proper personal-information management has led to incidents such as customer data leaks. Accordingly, identity and access management (IAM) systems that handle user access and authorization management, authentication, and auditing are emerging as a safe and effective means of managing system access and accounts, in addition to improving work efficiency. Safe and effective operation and management require an analysis of such IAM technologies. This article therefore reviews recent trends in cloud-based IAM technology and the associated threat factors.) <|cite_end|>, IDTs infer a very simple, high-performing rule that achieves over 99\% classification accuracy. This rule classifies graphs based on whether their number of nodes is smaller or larger than 12. To the best of our knowledge, none of the existing GNNs or explanation methods have been able to infer this rule.
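The rule is also easy to check independently. The snippet below is a small sanity check under two assumptions on our side: the AIDS graphs are the ones distributed in the TUDataset collection (here loaded via PyTorch Geometric), and the class indexing is arbitrary, so we report the better of the two orientations of the threshold rule; the exact boundary (strictly fewer than 12 nodes) may need adjusting.
\begin{verbatim}
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root="data/TUDataset", name="AIDS")

sizes = [data.num_nodes for data in dataset]
labels = [int(data.y) for data in dataset]

# Rule from the text: classify by whether the graph has fewer than 12 nodes.
pred = [1 if n < 12 else 0 for n in sizes]
acc = sum(p == y for p, y in zip(pred, labels)) / len(labels)
print(f"accuracy of the node-count rule: {max(acc, 1 - acc):.4f}")
\end{verbatim}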
In the next section we discuss the relevant related work. In Section \ref{Sec:Preliminaries} we discuss the necessary background on graphs, logic and GNNs. We present IDTs in Section~\ref{Sec:IDT} and show how IDTs can be learned from GNNs in Section~\ref{Sec:Training}.
In Section~\ref{sec:cp}, we introduce an extension of $\C2$, which allows us to learn more expressive IDTs. Finally, we empirically evaluate IDTs in Section~\ref{Sec:Experiments}. We analyze some of the obtained logical explanations in Section~\ref{Sec:Explainability}. We summarize our work and discuss future research directions in Section~\ref{Sec:Conclusion}.
Related Work
\label{sec:relatedwork}
Our work is related to explanation methods of GNNs <|cite_start|> (Reference: Explaining the Explainers in Graph Neural Networks: a Comparative Study: Following a fast initial breakthrough in graph based learning, Graph Neural Networks (GNNs) have reached a widespread application in many science and engineering fields, prompting the need for methods to understand their decision process. GNN explainers have started to emerge in recent years, with a multitude of methods both novel or adapted from other domains. To sort out this plethora of alternative approaches, several studies have benchmarked the performance of different explainers in terms of various explainability metrics. However, these earlier works make no attempts at providing insights into why different GNN architectures are more or less explainable, or which explainer should be preferred in a given setting. In this survey, we fill these gaps by devising a systematic experimental study, which tests ten explainers on eight representative architectures trained on six carefully designed graph and node classification datasets. With our results we provide key insights on the choice and applicability of GNN explainers, we isolate key components that make them usable and successful and provide recommendations on how to avoid common interpretation pitfalls. We conclude by highlighting open questions and directions of possible future research.) <|cite_end|> and to logical approaches <|cite_start|> (Reference: The Logical Expressiveness of Graph Neural Networks: The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.) <|cite_end|> <|cite_start|> (Reference: The Logic of Graph Neural Networks: Graph neural networks (GNNs) are deep learning architectures for machine learning problems on graphs. It has recently been shown that the expressiveness of GNNs can be characterised precisely by the combinatorial Weisfeiler-Leman algorithms and by finite variable counting logics. 
The correspondence has even led to new, higher-order GNNs corresponding to the WL algorithm in higher dimensions. The purpose of this paper is to explain these descriptive characterisations of GNNs.) <|cite_end|> <|cite_start|> (Reference: The Descriptive Complexity of Graph Neural Networks: We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class (non-uniform) TC^0. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmod", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded depth Boolean circuits with threshold gates, that is, exactly the queries in TC^0. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations. Therefore, they are contained in uniform TC^0.) <|cite_end|> for analyzing their expressivity.
Explanation methods aim to derive insights about the process underlying the \emph{model predictions}. Although IDTs may aid such understanding, our goal is different. We aim to distill an interpretable classification model for the \emph{data}, while using the trained GNN model as guidance for the learning process. Hence, we want our model not only to be interpretable, but also to generalize well. In the literature, methods that provide a global explainer, i.e., an interpretable surrogate model <|cite_start|> (Reference: Global Explainability of GNNs via Logic Combination of Learned Concepts: While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.) <|cite_end|>, can be adapted to yield classification models for the underlying data. However, such models come at a significant cost in accuracy.
Our work is also loosely connected to the general problem of learning decision trees from neural networks. This problem has already been extensively investigated for tabular data <|cite_start|> (Reference: Extracting Tree-Structured Representations of Trained Networks: A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, TREPAN, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that TREPAN is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces.) <|cite_end|> <|cite_start|> (Reference: Extracting Decision Trees from Trained Neural Networks: Neural Networks are successful in acquiring hidden knowledge in datasets. Their biggest weakness is that the knowledge they acquire is represented in a form not understandable to humans. Researchers tried to address this problem by extracting rules from trained Neural Networks. Most of the proposed rule extraction methods required specialized type of Neural Networks; some required binary inputs and some were computationally expensive. Craven proposed extracting MofN type Decision Trees from Neural Networks. We believe MofN type Decision Trees are only good for MofN type problems and trees created for regular high dimensional real world problems may be very complex. In this paper, we introduced a new method for extracting regular C4.5 like Decision Trees from trained Neural Networks. We showed that the new method (DecText) is effective in extracting high fidelity trees from trained networks. We also introduced a new discretization technique to make DecText be able to handle continuous features and a new pruning technique for finding simplest tree with the highest fidelity.) <|cite_end|> <|cite_start|> (Reference: Extracting Decision Trees from Trained Neural Networks: Neural Networks are successful in acquiring hidden knowledge in datasets. Their biggest weakness is that the knowledge they acquire is represented in a form not understandable to humans. Researchers tried to address this problem by extracting rules from trained Neural Networks. Most of the proposed rule extraction methods required specialized type of Neural Networks; some required binary inputs and some were computationally expensive. Craven proposed extracting MofN type Decision Trees from Neural Networks. We believe MofN type Decision Trees are only good for MofN type problems and trees created for regular high dimensional real world problems may be very complex. In this paper, we introduced a new method for extracting regular C4.5 like Decision Trees from trained Neural Networks. We showed that the new method (DecText) is effective in extracting high fidelity trees from trained networks. We also introduced a new discretization technique to make DecText be able to handle continuous features and a new pruning technique for finding simplest tree with the highest fidelity.)
<|cite_end|> <|cite_start|> (Reference: Decision Tree Extraction from Trained Neural Networks: Artificial Neural Networks (ANNs) have proved both a popular and powerful technique for pattern recognition tasks in a number of problem domains. However, the adoption of ANNs in many areas has been impeded, due to their inability to explain how they came to their conclusion, or show in a readily comprehendible form the knowledge they have obtained. This paper presents an algorithm that addresses these problems. The algorithm achieves this by extracting a Decision Tree, a graphical and easily understood symbolic representation of a decision process, from a trained ANN. The algorithm does not make assumptions about the ANN’s architecture or training algorithm; therefore, it can be applied to any type of ANN. The algorithm is empirically compared with Quinlan’s C4.5 (a common Decision Tree induction algorithm) using standard benchmark datasets. For most of the datasets used in the evaluation, the new algorithm is shown to extract Decision Trees that have a higher predictive accuracy than those induced using C4.5 directly.) <|cite_end|>. Furthermore, recent works have also investigated the tweaking of learning process or the neural architecture itself for learning decision trees <|cite_start|> (Reference: Enhancing Decision Tree Based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization: One obstacle that so far prevents the introduction of machine learning models primarily in critical areas is the lack of explainability. In this work, a practicable approach of gaining explainability of deep artificial neural networks (NN) using an interpretable surrogate model based on decision trees is presented. Simply fitting a decision tree to a trained NN usually leads to unsatisfactory results in terms of accuracy and fidelity. Using L1-orthogonal regularization during training, however, preserves the accuracy of the NN, while it can be closely approximated by small decision trees. Tests with different data sets confirm that L1-orthogonal regularization yields models of lower complexity and at the same time higher fidelity compared to other regularizers.) <|cite_end|> <|cite_start|> (Reference: Beyond Sparsity: Tree Regularization of Deep Models for Interpretability: The lack of interpretability remains a key barrier to the adoption of deep models in many applications. In this work, we explicitly regularize deep models so human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. Using intuitive toy examples as well as medical tasks for treating sepsis and HIV, we demonstrate that this new tree regularization yields models that are easier for humans to simulate than simpler L1 or L2 penalties without sacrificing predictive power.) <|cite_end|> <|cite_start|> (Reference: Deep Neural Decision Trees: Deep neural networks have been proven powerful at processing perceptual data, such as images and audio. However for tabular data, tree-based models are more popular. A nice property of tree-based models is their natural interpretability. In this work, we present Deep Neural Decision Trees (DNDT) -- tree models realised by neural networks. A DNDT is intrinsically interpretable, as it is a tree. 
Yet as it is also a neural network (NN), it can be easily implemented in NN toolkits, and trained with gradient descent rather than greedy splitting. We evaluate DNDT on several tabular datasets, verify its efficacy, and investigate similarities and differences between DNDT and vanilla decision trees. Interestingly, DNDT self-prunes at both split and feature-level.) <|cite_end|> <|cite_start|> (Reference: Deep neural decision forests: We present Deep Neural Decision Forests - a novel approach that unifies classification trees with the representation learning functionality known from deep convolutional networks, by training them in an end-to-end manner. To combine these two worlds, we introduce a stochastic and differentiable decision tree model, which steers the representation learning usually conducted in the initial layers of a (deep) convolutional network. Our model differs from conventional deep networks because a decision forest provides the final predictions and it differs from conventional decision forests since we propose a principled, joint and global optimization of split and leaf node parameters. We show experimental results on benchmark machine learning datasets like MNIST and ImageNet and find on-par or superior results when compared to state-of-the-art deep models. Most remarkably, we obtain Top5-Errors of only 7.84%/6.38% on ImageNet validation data when integrating our forests in a single-crop, single/seven model GoogLeNet architecture, respectively. Thus, even without any form of training data set augmentation we are improving on the 6.67% error obtained by the best GoogLeNet architecture (7 models, 144 crops).) <|cite_end|>. Although these results are related to our approach in spirit, our work is fundamentally different in its theoretical motivation and the learning procedure. GNNs expressivity is deeply connected to that of first-order logic <|cite_start|> (Reference: The Logical Expressiveness of Graph Neural Networks: The ability of graph neural networks (GNNs) for distinguishing nodes in graphs has been recently characterized in terms of the Weisfeiler-Lehman (WL) test for checking graph isomorphism. This characterization, however, does not settle the issue of which Boolean node classifiers (i.e., functions classifying nodes in graphs as true or false) can be expressed by GNNs. We tackle this problem by focusing on Boolean classifiers expressible as formulas in the logic FOC2, a well-studied fragment of first order logic. FOC2 is tightly related to the WL test, and hence to GNNs. We start by studying a popular class of GNNs, which we call AC-GNNs, in which the features of each node in the graph are updated, in successive layers, only in terms of the features of its neighbors. We show that this class of GNNs is too weak to capture all FOC2 classifiers, and provide a syntactic characterization of the largest subclass of FOC2 classifiers that can be captured by AC-GNNs. This subclass coincides with a logic heavily used by the knowledge representation community. We then look at what needs to be added to AC-GNNs for capturing all FOC2 classifiers. We show that it suffices to add readout functions, which allow to update the features of a node not only in terms of its neighbors, but also in terms of a global attribute vector. We call GNNs of this kind ACR-GNNs. 
We experimentally validate our findings showing that, on synthetic data conforming to FOC2 formulas, AC-GNNs struggle to fit the training data while ACR-GNNs can generalize even to graphs of sizes not seen during training.) <|cite_end|> <|cite_start|> (Reference: The Descriptive Complexity of Graph Neural Networks: We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class (non-uniform) TC^0. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic "sigmod", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded depth Boolean circuits with threshold gates, that is, exactly the queries in TC^0. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations. Therefore, they are contained in uniform TC^0.) <|cite_end|>. Hence, using logic-based decision trees is a natural choice for learning decision trees from GNNs.
Explanation methods that distill surrogate models from GNNs come closest to our approach. <|cite_start|> (Reference: Global Explainability of GNNs via Logic Combination of Learned Concepts: While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs.) <|cite_end|> first derive instance-level local subgraphs as explanations and then cluster them to extrapolate a model-level Boolean formula using the subgraphs as concepts. <|cite_start|> (Reference: XGNN: Towards Model-Level Explanations of Graph Neural Networks: Graphs neural networks (GNNs) learn node features by aggregating and combining neighbor information, which have achieved promising performance on many graph tasks. However, GNNs are mostly treated as black-boxes and lack human intelligible explanations. Thus, they cannot be fully trusted and used in certain application domains if GNN models cannot be explained. In this work, we propose a novel approach, known as XGNN, to interpret GNNs at the model-level. Our approach can provide high-level insights and generic understanding of how GNNs work. In particular, we propose to explain GNNs by training a graph generator so that the generated graph patterns maximize a certain prediction of the model. We formulate the graph generation as a reinforcement learning task, where for each step, the graph generator predicts how to add an edge into the current graph. The graph generator is trained via a policy gradient method based on information from the trained GNNs. In addition, we incorporate several graph rules to encourage the generated graphs to be valid. Experimental results on both synthetic and real-world datasets show that our proposed methods help understand and verify the trained GNNs. Furthermore, our experimental results indicate that the generated graphs can provide guidance on how to improve the trained GNNs.) <|cite_end|> base their approach on input-optimization, i.e., globally reducing graphs to a number of instances for which the explanation is then given.
Their approach requires prior domain knowledge.
Most recently, first compute (almost) categorical layer-wise node representations using GNN layers with Gumbel-Softmax update functions.
Subsequently, they replace the neural networks by decision trees trained on the categorical node representations.
Their approach results in an interpretable message passing scheme based on intermediate categorical node states, but requires training a specific GNN architecture. All three approaches use graphs, subgraphs or their combinations as the explanation model. This restricts these methods, as many simple and important constraints, e.g. that a graph has more than 12 nodes, cannot easily be expressed in terms of subgraphs. <|paper_end|>
"<|reference_start|> Global Explainability of GNNs via Logic Combination of Learned Concepts: While instance-level explanation of GNN is a well-studied problem with plenty of approaches being developed, providing a global explanation for the behaviour of a GNN is much less explored, despite its potential in interpretability and debugging. Existing solutions either simply list local explanations for a given class, or generate a synthetic prototypical graph with maximal score for a given class, completely missing any combinatorial aspect that the GNN could have learned. In this work, we propose GLGExplainer (Global Logic-based GNN Explainer), the first Global Explainer capable of generating explanations as arbitrary Boolean combinations of learned graphical concepts. GLGExplainer is a fully differentiable architecture that takes local explanations as inputs and combines them into a logic formula over graphical concepts, represented as clusters of local explanations. Contrary to existing solutions, GLGExplainer provides accurate and human-interpretable global explanations that are perfectly aligned with ground-truth explanations (on synthetic data) or match existing domain knowledge (on real-world data). Extracted formulas are faithful to the model predictions, to the point of providing insights into some occasionally incorrect rules learned by the model, making GLGExplainer a promising diagnostic tool for learned GNNs. <|reference_end|>",
"<|reference_start|> Extracting Tree-Structured Representations of Trained Networks: A significant limitation of neural networks is that the representations they learn are usually incomprehensible to humans. We present a novel algorithm, TREPAN, for extracting comprehensible, symbolic representations from trained neural networks. Our algorithm uses queries to induce a decision tree that approximates the concept represented by a given network. Our experiments demonstrate that TREPAN is able to produce decision trees that maintain a high level of fidelity to their respective networks while being comprehensible and accurate. Unlike previous work in this area, our algorithm is general in its applicability and scales well to large networks and problems with high-dimensional input spaces. <|reference_end|>",
"<|reference_start|> Beyond Sparsity: Tree Regularization of Deep Models for Interpretability: The lack of interpretability remains a key barrier to the adoption of deep models in many applications. In this work, we explicitly regularize deep models so human users might step through the process behind their predictions in little time. Specifically, we train deep time-series models so their class-probability predictions have high accuracy while being closely modeled by decision trees with few nodes. Using intuitive toy examples as well as medical tasks for treating sepsis and HIV, we demonstrate that this new tree regularization yields models that are easier for humans to simulate than simpler L1 or L2 penalties without sacrificing predictive power. <|reference_end|>",
"<|reference_start|> The Descriptive Complexity of Graph Neural Networks: We analyse the power of graph neural networks (GNNs) in terms of Boolean circuit complexity and descriptive complexity. We prove that the graph queries that can be computed by a polynomial-size bounded-depth family of GNNs are exactly those definable in the guarded fragment GFO+C of first-order logic with counting and with built-in relations. This puts GNNs in the circuit complexity class (non-uniform) TC^0. Remarkably, the GNN families may use arbitrary real weights and a wide class of activation functions that includes the standard ReLU, logistic \"sigmod\", and hyperbolic tangent functions. If the GNNs are allowed to use random initialisation and global readout (both standard features of GNNs widely used in practice), they can compute exactly the same queries as bounded depth Boolean circuits with threshold gates, that is, exactly the queries in TC^0. Moreover, we show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations. Therefore, they are contained in uniform TC^0. <|reference_end|>"
] | [
7,
8,
13,
17
] | {"<|cite_1|>": "arxiv-174692", "<|cite_2|>": "arxiv-105493", "<|cite_3|>": "ss-1097844", "<|cite_4|>": "arxiv-457648", "<|multi_cite_5_1|>": "ss-1364728", "<|multi_cite_5_2|>": "arxiv-337819", "<|multi_cite_5_3|>": "arxiv-487162", "<|cite_6|>": "arxiv-453684", "<|multi_cite_7_1|>": "ss-682234", "<|multi_cite_7_2|>": "ss-1533026", "<|multi_cite_7_3|>": "ss-1533026", "<|multi_cite_7_4|>": "ss-1364729", "<|multi_cite_8_1|>": "ss-1364730", "<|multi_cite_8_2|>": "arxiv-140336", "<|multi_cite_8_3|>": "arxiv-162950", "<|multi_cite_8_4|>": "ss-1527814", "<|multi_cite_9_1|>": "ss-1364728", "<|multi_cite_9_2|>": "arxiv-487162", "<|cite_10|>": "arxiv-453684", "<|cite_11|>": "arxiv-269404"} |
2010.12460 | <|paper_start|> Title: Adaptive Gradient Quantization for Data-Parallel SGD
Abstract: Adaptive Gradient Quantization for Data-Parallel SGD: Many communication-efficient variants of SGD use gradient quantization schemes. These schemes are often heuristic and fixed over the course of training. We empirically observe that the statistics of gradients of deep models change during the training. Motivated by this observation, we introduce two adaptive quantization schemes, ALQ and AMQ. In both schemes, processors update their compression schemes in parallel by efficiently computing sufficient statistics of a parametric distribution. We improve the validation accuracy by almost 2% on CIFAR-10 and 1% on ImageNet in challenging low-cost communication setups. Our adaptive methods are also significantly more robust to the choice of hyperparameters.
Introduction
\label{sec:intro}
\begin{wrapfigure}{R}{0.4\textwidth}
\vspace*{-0.7cm}
\includegraphics[width=0.39\textwidth]{figs/variance-change}
\caption{Changes in the average variance of normalized gradient
coordinates in a ResNet-$32$ model trained on CIFAR-10. Colors
distinguish different runs with different seeds. Learning rate is
decayed by a factor of $10$ twice at 40K and 60K iterations. The
variance changes rapidly during the first epoch. The next
noticeable change happens after the first learning rate drop and
another one appears after the second drop.}
\label{fig:variance-change}
\vspace*{-0.4cm}
\end{wrapfigure}
Stochastic gradient descent (SGD) and its
variants are currently the method of choice for training deep models.
Yet, training on large datasets cannot always be carried out on
a single computational node due to memory and scalability limitations.
Data-parallel SGD is a remarkably scalable
variant, in particular on multi-GPU systems <|cite_start|> (Reference: {Parallelized Stochastic Gradient Descent: With the increase in available data parallel machine learning has become an increasingly pressing problem. In this paper we present the first parallel stochastic gradient descent algorithm including a detailed analysis and experimental evidence. Unlike prior work on parallel optimization algorithms [5, 7] our variant comes with parallel acceleration guarantees and it poses no overly tight latency constraints, which might only be available in the multicore setting. Our analysis introduces a novel proof technique — contractive mappings to quantify the speed of convergence of parameter distributions to their asymptotic limits. As a side effect this answers the question of how quickly stochastic gradient descent algorithms reach the asymptotically normal regime [1, 8].) <|cite_end|> <|cite_start|> (Reference: {Hogwild: A Lock-Free Approach to Parallelizing Stochastic
Gradient Descent: Stochastic Gradient Descent (SGD) is a popular algorithm that can achieve state-of-the-art performance on a variety of machine learning tasks. Several researchers have recently proposed schemes to parallelize SGD, but all require performance-destroying memory locking and synchronization. This work aims to show using novel theoretical analysis, algorithms, and implementation that SGD can be implemented without any locking. We present an update scheme called HOGWILD! which allows processors access to shared memory with the possibility of overwriting each other's work. We show that when the associated optimization problem is sparse, meaning most gradient updates only modify small parts of the decision variable, then HOGWILD! achieves a nearly optimal rate of convergence. We demonstrate experimentally that HOGWILD! outperforms alternative schemes that use locking by an order of magnitude.) <|cite_end|> <|cite_start|> (Reference: Large Scale Distributed Deep Networks: Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.) <|cite_end|> <|cite_start|> (Reference: Deep learning with COTS HPC systems: Scaling up deep learning algorithms has been shown to lead to increased performance in benchmark tasks and to enable discovery of complex high-level features. Recent efforts to train extremely large networks (with over 1 billion parameters) have relied on cloudlike computing infrastructure and thousands of CPU cores. In this paper, we present technical details and results from our own system based on Commodity Off-The-Shelf High Performance Computing (COTS HPC) technology: a cluster of GPU servers with Infiniband interconnects and MPI. Our system is able to train 1 billion parameter networks on just 3 machines in a couple of days, and we show that it can scale to networks with over 11 billion parameters using just 16 machines. As this infrastructure is much more easily marshaled by others, the approach enables much wider-spread research with extremely large neural networks.) 
<|cite_end|> <|cite_start|> (Reference: Project Adam: Building an efficient and scalable deep learning training system: Large deep neural network models have recently demonstrated state-of-the-art accuracy on hard visual recognition tasks. Unfortunately such models are extremely time consuming to train and require large amount of compute cycles. We describe the design and implementation of a distributed system called Adam comprised of commodity server machines to train such models that exhibits world-class performance, scaling and task accuracy on visual recognition tasks. Adam achieves high efficiency and scalability through whole system co-design that optimizes and balances workload computation and communication. We exploit asynchrony throughout the system to improve performance and show that it additionally improves the accuracy of trained models. Adam is significantly more efficient and scalable than was previously thought possible and used 30x fewer machines to train a large 2 billion connection model to 2x higher accuracy in comparable time on the ImageNet 22,000 category image classification task than the system that previously held the record for this benchmark. We also show that task accuracy improves with larger models. Our results provide compelling evidence that a distributed systems-driven approach to deep learning using current training algorithms is worth pursuing.) <|cite_end|> <|cite_start|> (Reference: Scaling distributed machine learning with the Parameter Server: We propose a parameter server framework for distributed machine learning problems. Both data and workloads are distributed over worker nodes, while the server nodes maintain globally shared parameters, represented as dense or sparse vectors and matrices. The framework manages asynchronous data communication between nodes, and supports flexible consistency models, elastic scalability, and continuous fault tolerance.
To demonstrate the scalability of the proposed framework, we show experimental results on petabytes of real data with billions of examples and parameters on problems ranging from Sparse Logistic Regression to Latent Dirichlet Allocation and Distributed Sketching.) <|cite_end|> <|cite_start|> (Reference: Asynchronous stochastic convex optimization: the noise is in the noise and SGD don't care: We show that asymptotically, completely asynchronous stochastic gradient procedures achieve optimal (even to constant factors) convergence rates for the solution of convex optimization problems under nearly the same conditions required for asymptotic optimality of standard stochastic gradient procedures. Roughly, the noise inherent to the stochastic approximation scheme dominates any noise from asynchrony. We also give empirical evidence demonstrating the strong performance of asynchronous, parallel stochastic optimization schemes, demonstrating that the robustness inherent to stochastic approximation problems allows substantially faster parallel and asynchronous solution methods.) <|cite_end|> <|cite_start|> (Reference: Petuum: A New Platform for Distributed Machine Learning on Big Data: What is a systematic way to efficiently apply a wide spectrum of advanced ML programs to industrial scale problems, using Big Models (up to 100s of billions of parameters) on Big Data (up to terabytes or petabytes)? Modern parallelization strategies employ fine-grained operations and scheduling beyond the classic bulk-synchronous processing paradigm popularized by MapReduce, or even specialized graph-based execution that relies on graph representations of ML programs. The variety of approaches tends to pull systems and algorithms design in different directions, and it remains difficult to find a universal platform applicable to a wide range of ML programs at scale. We propose a general-purpose framework that systematically addresses data- and model-parallel challenges in large-scale ML, by observing that many ML programs are fundamentally optimization-centric and admit error-tolerant, iterative-convergent algorithmic solutions. This presents unique opportunities for an integrative system design, such as bounded-error network synchronization and dynamic scheduling based on ML program structure. We demonstrate the efficacy of these system designs versus well-known implementations of modern ML algorithms, allowing ML programs to run in much less time and at considerably larger model sizes, even on modestly-sized compute clusters.) <|cite_end|> <|cite_start|> (Reference: Deep learning with Elastic Averaging SGD: We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers), is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to the improved performance. We propose synchronous and asynchronous variants of the new algorithm. 
We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. We show that the stability of EASGD is guaranteed when a simple stability condition is satisfied, which is not the case for ADMM. We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. Asynchronous variant of the algorithm is applied to train convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient.) <|cite_end|>.
However, despite its many advantages, distribution
introduces new challenges for optimization algorithms. In particular,
data-parallel SGD has large communication cost due to the need to transmit potentially huge gradient vectors.
Ideally, we want distributed optimization methods that match the performance of
SGD on a single hypothetical super machine, while paying a negligible
communication cost.
A common approach to reducing the communication cost in data-parallel SGD is
gradient compression and quantization <|cite_start|> (Reference: Large Scale Distributed Deep Networks: Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm.) <|cite_end|> <|cite_start|> (Reference: {1-Bit Stochastic Gradient Descent and Its Application to Data-Parallel Distributed Training of Speech DNNs: We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-parallel deterministically distributed SGD by combining this finding with AdaGrad, automatic minibatch-size selection, double buffering, and model parallelism. Unexpectedly, quantization benefits AdaGrad, giving a small accuracy gain. For a typical Switchboard DNN with 46M parameters, we reach computation speeds of 27k frames per second (kfps) when using 2880 samples per minibatch, and 51kfps with 16k, on a server with 8 K20X GPUs. This corresponds to speed-ups over a single GPU of 3.6 and 6.3, respectively. 7 training passes over 309h of data complete in under 7h. A 160M-parameter model training processes 3300h of data in under 16h on 20 dual-GPU servers—a 10 times speed-up—albeit at a small accuracy loss.) <|cite_end|> <|cite_start|> (Reference: Deep Learning with Limited Numerical Precision: Training of large-scale deep neural networks is often constrained by the available computational resources. We study the effect of limited precision data representation and computation on neural network training. Within the context of low-precision fixed-point computations, we observe the rounding scheme to play a crucial role in determining the network's behavior during training. Our results show that deep networks can be trained using only 16-bit wide fixed-point number representation when using stochastic rounding, and incur little to no degradation in the classification accuracy. 
We also demonstrate an energy-efficient hardware accelerator that implements low-precision fixed-point arithmetic with stochastic rounding.) <|cite_end|> <|cite_start|> (Reference: TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems: TensorFlow is an interface for expressing machine learning algorithms, and an implementation for executing such algorithms. A computation expressed using TensorFlow can be executed with little or no change on a wide variety of heterogeneous systems, ranging from mobile devices such as phones and tablets up to large-scale distributed systems of hundreds of machines and thousands of computational devices such as GPU cards. The system is flexible and can be used to express a wide variety of algorithms, including training and inference algorithms for deep neural network models, and it has been used for conducting research and for deploying machine learning systems into production across more than a dozen areas of computer science and other fields, including speech recognition, computer vision, robotics, information retrieval, natural language processing, geographic information extraction, and computational drug discovery. This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google. The TensorFlow API and a reference implementation were released as an open-source package under the Apache 2.0 license in November, 2015 and are available at www.tensorflow.org.) <|cite_end|> <|cite_start|> (Reference: DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients: We propose DoReFa-Net, a method to train convolutional neural networks that have low bitwidth weights and activations using low bitwidth parameter gradients. In particular, during backward pass, parameter gradients are stochastically quantized to low bitwidth numbers before being propagated to convolutional layers. As convolutions during forward/backward passes can now operate on low bitwidth weights and activations/gradients respectively, DoReFa-Net can use bit convolution kernels to accelerate both training and inference. Moreover, as bit convolutions can be efficiently implemented on CPU, FPGA, ASIC and GPU, DoReFa-Net opens the way to accelerate training of low bitwidth neural network on these hardware. Our experiments on SVHN and ImageNet datasets prove that DoReFa-Net can achieve comparable prediction accuracy as 32-bit counterparts. For example, a DoReFa-Net derived from AlexNet that has 1-bit weights, 2-bit activations, can be trained from scratch using 6-bit gradients to get 46.1\% top-1 accuracy on ImageNet validation set. The DoReFa-Net AlexNet model is released publicly.) <|cite_end|> <|cite_start|> (Reference: TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning: High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1,0,1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. 
The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.) <|cite_end|> <|cite_start|> (Reference: signSGD: Compressed Optimisation for Non-Convex Problems: Training large neural networks requires distributing learning across multiple workers, where the cost of communicating gradients can be a significant bottleneck. signSGD alleviates this problem by transmitting just the sign of each minibatch stochastic gradient. We prove that it can get the best of both worlds: compressed gradients and SGD-level convergence rate. The relative $\ell_1/\ell_2$ geometry of gradients, noise and curvature informs whether signSGD or SGD is theoretically better suited to a particular problem. On the practical side we find that the momentum counterpart of signSGD is able to match the accuracy and convergence speed of Adam on deep Imagenet models. We extend our theory to the distributed setting, where the parameter server uses majority vote to aggregate gradient signs from each worker enabling 1-bit compression of worker-server communication in both directions. Using a theorem by Gauss we prove that majority vote can achieve the same reduction in variance as full precision distributed SGD. Thus, there is great promise for sign-based optimisation schemes to achieve fast communication and fast convergence. Code to reproduce experiments is to be found at https://github.com/jxbz/signSGD .) <|cite_end|>. In full-precision data-parallel SGD, each processor
broadcasts its locally computed stochastic gradient vector at every iteration,
whereas in quantized data-parallel SGD, each processor compresses its
stochastic gradient before broadcasting. Current quantization methods are
either designed heuristically or fixed prior to training. Convergence rates in a stochastic optimization problem are controlled by the trace of the gradient covariance matrix, which is referred to as the gradient variance in this paper <|cite_start|> (Reference: Convex Optimization: Algorithms and Complexity: This monograph presents the main complexity theorems in convex optimization and their corresponding algorithms. Starting from the fundamental theory of black-box optimization, the material progresses towards recent advances in structural optimization and stochastic optimization. Our presentation of black-box optimization, strongly influenced by Nesterov's seminal book and Nemirovski's lecture notes, includes the analysis of cutting plane methods, as well as (accelerated) gradient descent schemes. We also pay special attention to non-Euclidean settings (relevant algorithms include Frank-Wolfe, mirror descent, and dual averaging) and discuss their relevance in machine learning. We provide a gentle introduction to structural optimization with FISTA (to optimize a sum of a smooth and a simple non-smooth term), saddle-point mirror prox (Nemirovski's alternative to Nesterov's smoothing), and a concise description of interior point methods. In stochastic optimization we discuss stochastic gradient descent, mini-batches, random coordinate descent, and sublinear algorithms. We also briefly touch upon convex relaxation of combinatorial problems and the use of randomness to round solutions, as well as random walks based methods.) <|cite_end|>. As
\cref{fig:variance-change} shows, no fixed method can be optimal
throughout the entire training because the distribution of gradients changes.
A quantization method that is optimal at the first iteration will not be optimal
after only a single epoch.
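To make the communication pattern concrete, the sketch below shows one round of quantized data-parallel SGD with a generic unbiased, norm-scaled quantizer; the uniform level placement and the helper names (\texttt{quantize}, \texttt{levels}) are illustrative stand-ins rather than the schemes proposed in this paper.
\begin{verbatim}
import numpy as np

def quantize(g, levels):
    # Unbiased stochastic quantization: scale by the gradient norm and round
    # each coordinate randomly to one of its two neighbouring levels.
    norm = np.linalg.norm(g)
    if norm == 0:
        return np.zeros_like(g)
    r = np.abs(g) / norm                                  # magnitudes in [0, 1]
    idx = np.clip(np.searchsorted(levels, r, side="right") - 1,
                  0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    p = (r - lo) / (hi - lo)                              # prob. of rounding up
    q = np.where(np.random.rand(*g.shape) < p, hi, lo)
    return norm * np.sign(g) * q                          # E[output] = g

levels = np.linspace(0.0, 1.0, 5)           # fixed uniform levels (illustrative)
worker_grads = [np.random.randn(10) for _ in range(4)]  # stand-in worker gradients
aggregated = np.mean([quantize(g, levels) for g in worker_grads], axis=0)
\end{verbatim}
In such a scheme each processor would transmit only the norm, the signs and the level indices of its gradient, which is where the savings in communication come from.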
In this paper, we propose two adaptive methods for quantizing the gradients in
data-parallel SGD\@. We study methods that are defined by a norm and a set of quantization levels. In Adaptive
Level Quantization (ALQ), we minimize the excess variance of quantization given
an estimate of the distribution of the gradients. In Adaptive Multiplier
Quantization (AMQ), we minimize the same objective as ALQ by modelling
quantization levels as exponentially spaced levels. AMQ solves for the optimal
value of a single multiplier parametrizing the exponentially spaced levels.
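As a rough illustration of the multiplier idea (a simplified stand-in, not the actual ALQ/AMQ objectives or update rules developed later), one can generate exponentially spaced levels from a single multiplier $p$ and pick the value of $p$ that minimizes an estimate of the variance added by quantization:
\begin{verbatim}
import numpy as np

def exp_levels(p, s):
    # Levels 0 < p**s < ... < p**2 < p < 1, parametrized by one multiplier p.
    return np.concatenate(([0.0], p ** np.arange(s, 0, -1), [1.0]))

def added_variance(levels, samples):
    # Monte-Carlo estimate of the variance introduced by unbiased stochastic
    # rounding of normalized coordinates (values in [0, 1]) to the levels.
    idx = np.clip(np.searchsorted(levels, samples, side="right") - 1,
                  0, len(levels) - 2)
    lo, hi = levels[idx], levels[idx + 1]
    pr = (samples - lo) / (hi - lo)
    return np.mean(pr * (1.0 - pr) * (hi - lo) ** 2)

# Stand-in for the magnitudes of normalized gradient coordinates.
samples = np.clip(np.abs(np.random.randn(100000)) * 0.05, 0.0, 1.0)
candidates = np.linspace(0.1, 0.9, 81)
best_p = min(candidates, key=lambda p: added_variance(exp_levels(p, 3), samples))
\end{verbatim}
In the actual methods, the relevant statistics of the gradient distribution are computed by the processors in parallel, and the levels are updated as those statistics drift during training.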
\subsection{Summary of contributions}
\begin{itemize}[leftmargin=*,itemsep=0ex]
\item We propose two adaptive gradient quantization methods, ALQ and AMQ,
in which processors update their compression methods in parallel.
\item We establish an upper bound on the excess variance for any arbitrary
sequence of quantization levels under general normalization that is tight in dimension, an upper
bound on the expected number of communication bits per iteration, and
strong convergence guarantees on a number of problems under standard
assumptions. Our bounds hold for any adaptive method, including ALQ and
AMQ.
\item
We improve the validation accuracy by almost $2\%$ on CIFAR-10 and $1\%$ on
ImageNet in challenging low-cost communication setups. Our adaptive
methods are significantly more robust to the choice of hyperparameters.\footnote{Open source code:
\url{http://github.com/tabrizian/learning-to-quantize}}
\end{itemize}
\subsection{Related work}
Adaptive quantization has been used for speech communication and storage <|cite_start|> (Reference: Adaptive Quantization in Differential PCM Coding of Speech: We describe an adaptive differential PCM (ADPCM) coder which makes instantaneous exponential changes of quantizer step-size. The coder includes a simple first-order predictor and a time-invariant, minimally complex adaptation strategy. Step-size multipliers depend only on the most recent quantizer output, and input signals of unknown variance can be accommodated. We derive appropriate multiplier values from computer simulations with speech signals and with Gauss-Markov inputs. We compare performance of the ADPCM coder with conventional log-PCM, using both objective and subjective criteria. Finally, we describe an economical integrated hardware implementation of the ADPCM coder. We believe that at bit rates of 24 to 32 kb/s, ADPCM provides a robust and efficient technique for speech communication and for digital storage of speech.) <|cite_end|>. In machine learning, several biased and unbiased schemes have been proposed to compress networks and gradients. Recently, lattice-based quantization has been studied for distributed mean estimation and variance reduction <|cite_start|> (Reference: Distributed mean estimation with optimal error bounds: Motivated by applications to distributed optimization and machine learning, we consider the distributed mean estimation problem, in which n nodes are each assigned a multidimensional input vector, and must cooperate to estimate the mean of the input vectors, while minimizing communication. In this paper, we provide the first tight bounds for this problem, in terms of the trade-off between the amount of communication between nodes and the variance of the node estimates relative to the true value of the mean.) <|cite_end|>. In this work, we focus on unbiased and coordinate-wise schemes to compress gradients. <|cite_start|> (Reference: QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding: Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to excellent scalability properties of this algorithm, and to its efficiency in the context of training deep neural networks. A fundamental barrier for parallelizing large-scale SGD is the fact that the cost of communicating the gradient updates between nodes can be very large. Consequently, lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always provably converge, and it is not clear whether they are optimal. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes which allow the compression of gradient updates at each node, while guaranteeing convergence under standard assumptions. QSGD allows the user to trade off compression and convergence time: it can communicate a sublinear number of bits per iteration in the model dimension, and can achieve asymptotically optimal communication cost. We complement our theoretical results with empirical data, showing that QSGD can significantly reduce communication cost, while being competitive with standard uncompressed techniques on a variety of real tasks. In particular, experiments show that gradient quantization applied to training of deep neural networks for image classification and automated speech recognition can lead to significant reductions in communication cost, and end-to-end training time. 
For instance, on 16 GPUs, we are able to train a ResNet-152 network on ImageNet 1.8x faster to full accuracy. Of note, we show that there exist generic parameter settings under which all known network architectures preserve or slightly improve their full accuracy when using quantization.) <|cite_end|> proposed Quantized SGD (QSGD) focusing on the uniform quantization of stochastic
gradients normalized to have unit Euclidean norm. Their experiments illustrate that a similar quantization method, where gradients are normalized to have unit $L^\infty$ norm, achieves better performance. We refer to this method as QSGDinf or Qinf for short. <|cite_start|> (Reference: TernGrad: Ternary Gradients to Reduce Communication in Distributed Deep Learning: High network communication cost for synchronizing gradients and parameters is the well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary gradients to accelerate distributed deep learning in data parallelism. Our approach requires only three numerical levels {-1,0,1}, which can aggressively reduce the communication time. We mathematically prove the convergence of TernGrad under the assumption of a bound on gradients. Guided by the bound, we propose layer-wise ternarizing and gradient clipping to improve its convergence. Our experiments show that applying TernGrad on AlexNet does not incur any accuracy loss and can even improve accuracy. The accuracy loss of GoogLeNet induced by TernGrad is less than 2% on average. Finally, a performance model is proposed to study the scalability of TernGrad. Experiments show significant speed gains for various deep neural networks. Our source code is available.) <|cite_end|> proposed TernGrad, which can be viewed as a special case of QSGDinf with three quantization levels. <|cite_start|> (Reference: NUQSGD: Improved communication efficiency for data-parallel SGD via nonuniform quantization: As the size and complexity of models and datasets grow, so does the need for communication-efficient variants of stochastic gradient descent that can be deployed on clusters to perform model fitting in parallel. Alistarh et al. (2017) describe two variants of data-parallel SGD that quantize and encode gradients to lessen communication costs. For the first variant, QSGD, they provide strong theoretical guarantees. For the second variant, which we call QSGDinf, they demonstrate impressive empirical gains for distributed training of large neural networks. Building on their work, we propose an alternative scheme for quantizing gradients and show that it yields stronger theoretical guarantees than exist for QSGD while matching the empirical performance of QSGDinf.) <|cite_end|> proposed nonuniform quantization levels (NUQSGD) and demonstrated
superior empirical results compared to QSGDinf. <|cite_start|> (Reference: Natural Compression for Distributed Deep Learning: Modern deep learning models are often trained in parallel over a collection of distributed machines to reduce training time. In such settings, communication of model updates among machines becomes a significant performance bottleneck and various lossy update compression techniques have been proposed to alleviate this problem. In this work, we introduce a new, simple yet theoretically and practically effective compression technique: natural compression (NC). Our technique is applied individually to all entries of the to-be-compressed update vector and works by randomized rounding to the nearest (negative or positive) power of two, which can be computed in a "natural" way by ignoring the mantissa. We show that compared to no compression, NC increases the second moment of the compressed vector by not more than the tiny factor $\frac{9}{8}$, which means that the effect of NC on the convergence speed of popular training algorithms, such as distributed SGD, is negligible. However, the communications savings enabled by NC are substantial, leading to $3$-$4\times$ improvement in overall theoretical running time. For applications requiring more aggressive compression, we generalize NC to natural dithering, which we prove is exponentially better than the common random dithering technique. Our compression operators can be used on their own or in combination with existing operators for a more aggressive combined effect and offer new state-of-the-art both in theory and practice.) <|cite_end|> proposed natural compression and dithering schemes, where the latter is a special case of logarithmic quantization.
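For orientation, the baselines above differ mainly in the norm used to scale the gradient and in where the quantization levels are placed; the snippet below sketches these differences with illustrative values rather than the exact algorithms of the cited works.
\begin{verbatim}
import numpy as np

g = np.random.randn(10)            # stand-in for a stochastic gradient

l2_scale  = np.linalg.norm(g)      # QSGD: scale by the Euclidean norm
inf_scale = np.abs(g).max()        # QSGDinf: scale by the L-infinity norm

# TernGrad keeps only the levels {0, 1} (times sign and scale), i.e. a
# three-level special case of the L-infinity-scaled scheme:
ternary = inf_scale * np.sign(g) * (np.random.rand(10) < np.abs(g) / inf_scale)

# NUQSGD and natural dithering instead place levels non-uniformly, e.g.
# exponentially (1, 1/2, 1/4, ...), matching the heavy concentration of
# normalized gradient coordinates near zero.
log_levels = np.concatenate(([0.0], 0.5 ** np.arange(3, -1, -1)))
\end{verbatim}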
There have been prior attempts at adaptive quantization methods. <|cite_start|> (Reference: Zipml: Training linear models with end-to-end low precision, and a little bit of deep learning: Recently there has been significant interest in training machine-learning models at low precision: by reducing precision, one can reduce computation and communication by one order of magnitude. We examine training at reduced precision, both from a theoretical and practical perspective, and ask: is it possible to train models at end-to-end low precision with provable guarantees? Can this lead to consistent order-ofmagnitude speedups? We mainly focus on linear models, and the answer is yes for linear models. We develop a simple framework called ZipML based on one simple but novel strategy called double sampling. Our ZipML framework is able to execute training at low precision with no bias, guaranteeing convergence, whereas naive quantization would introduce significant bias. We validate our framework across a range of applications, and show that it enables an FPGA prototype that is up to 6.5× faster than an implementation using full 32-bit precision. We further develop a variance-optimal stochastic quantization strategy and show that it can make a significant difference in a variety of settings. When applied to linear models together with double sampling, we save up to another 1.7× in data movement compared with uniform quantization. When training deep networks with quantized models, we achieve higher accuracy than the state-of-theart XNOR-Net. ETH Zurich, Switzerland Massachusetts Institute of Technology, USA IST Austria, Austria University of Rochester, USA. Correspondence to: Hantian Zhang <[email protected]>, Ce Zhang <[email protected]>. Proceedings of the 34 th International Conference on Machine Learning, Sydney, Australia, PMLR 70, 2017. Copyright 2017 by the author(s). (a) Linear Regression (c) 3D Reconstruction 32Bit 12Bit (b) FPGA Speed Up (d) Deep Learning Machine Learning Models Data Movement Channels Speed up because of our techniques Gradient Input Samples Model Linear Models De Sa et la., Alistarh et al., ... 1. Double Sampling 2. Data-Optimal Encoding Stochastic Rounding Very Significant Speed up (Up to 10x) Deep Learning Courbariaux et al., Rastegari et al., ... Data-Optimal Encoding Significant Speed up 0 25 50 75 100 32-bit Full Precision Double Sampling 4-bit #Epochs Tr ai ni ng L os s #Epochs (a) Linear Regression (b) LS-SVM 0 25 50 75 100 .3) <|cite_end|> proposed ZipML, which is an optimal quantization method if all points to be quantized
are known a priori. To find the optimal sequence of
quantization levels, a dynamic program is solved whose computational and
memory cost is quadratic in the number of points to be quantized, which in the case of gradients would correspond to their dimension.
For this reason, ZipML is
impractical for quantizing on the fly, and is in fact used for (offline) dataset compression.
They also proposed an approximation in which a subsampled set of points, found by a single
scan over the data, is used. However, as we show in this paper, this
one-time scan is not enough as the distribution of stochastic gradients changes
during the training. <|cite_start|> (Reference: LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks: Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets) <|cite_end|> proposed LQ-Net, where weights and activations are
quantized such that the inner products can be computed efficiently with bitwise
operations. Compared to LQ-Net, our methods do not need additional memory for
encoding vectors. Concurrent with our work, <|cite_start|> (Reference: Don't Waste Your Bits! Squeeze Activations and Gradients for Deep
Neural Networks via TinyScript: Notice that the points on ∂D must satisfy ŝ_i = ŝ_j for some i ≠ j, which will in fact result in a loss of parameters. We assume w.l.o.g. that the global minimum ŝ* = (ŝ*_1, ..., ŝ*_{n−1}) “only” satisfies ŝ*_1 = ŝ*_2 (which means that ŝ_3 ≠ ŝ_2 and ŝ_1 ≠ 0). Now denote ŝ′ = (ŝ*_1, (ŝ*_1 + ŝ*_3)/2, ..., ŝ*_{n−1}). We will show that V_n̂(ŝ′) < V_n̂(ŝ*), which is a contradiction to the definition of ŝ*. In fact,) <|cite_end|> proposed to
quantize activations and gradients by modelling them with Weibull
distributions. In comparison, our proposed methods accommodate general
distributions. Further, our approach does not require any assumptions on the
upper bound of the gradients. <|paper_end|> | [
"<|reference_start|> Large Scale Distributed Deep Networks: Recent work in unsupervised feature learning and deep learning has shown that being able to train large models can dramatically improve performance. In this paper, we consider the problem of training a deep network with billions of parameters using tens of thousands of CPU cores. We have developed a software framework called DistBelief that can utilize computing clusters with thousands of machines to train large models. Within this framework, we have developed two algorithms for large-scale distributed training: (i) Downpour SGD, an asynchronous stochastic gradient descent procedure supporting a large number of model replicas, and (ii) Sandblaster, a framework that supports a variety of distributed batch optimization procedures, including a distributed implementation of L-BFGS. Downpour SGD and Sandblaster L-BFGS both increase the scale and speed of deep network training. We have successfully used our system to train a deep network 30x larger than previously reported in the literature, and achieves state-of-the-art performance on ImageNet, a visual object recognition task with 16 million images and 21k categories. We show that these same techniques dramatically accelerate the training of a more modestly- sized deep network for a commercial speech recognition service. Although we focus on and report performance of these methods as applied to training large neural networks, the underlying algorithms are applicable to any gradient-based machine learning algorithm. <|reference_end|>",
"<|reference_start|> Deep learning with Elastic Averaging SGD: We study the problem of stochastic optimization for deep learning in the parallel computing environment under communication constraints. A new algorithm is proposed in this setting where the communication and coordination of work among concurrent processes (local workers), is based on an elastic force which links the parameters they compute with a center variable stored by the parameter server (master). The algorithm enables the local workers to perform more exploration, i.e. the algorithm allows the local variables to fluctuate further from the center variable by reducing the amount of communication between local workers and the master. We empirically demonstrate that in the deep learning setting, due to the existence of many local optima, allowing more exploration can lead to the improved performance. We propose synchronous and asynchronous variants of the new algorithm. We provide the stability analysis of the asynchronous variant in the round-robin scheme and compare it with the more common parallelized method ADMM. We show that the stability of EASGD is guaranteed when a simple stability condition is satisfied, which is not the case for ADMM. We additionally propose the momentum-based version of our algorithm that can be applied in both synchronous and asynchronous settings. Asynchronous variant of the algorithm is applied to train convolutional neural networks for image classification on the CIFAR and ImageNet datasets. Experiments demonstrate that the new algorithm accelerates the training of deep architectures compared to DOWNPOUR and other common baseline approaches and furthermore is very communication efficient. <|reference_end|>",
"<|reference_start|> {1-Bit Stochastic Gradient Descent and Its Application to Data-Parallel Distributed Training of Speech DNNs: We show empirically that in SGD training of deep neural networks, one can, at no or nearly no loss of accuracy, quantize the gradients aggressively—to but one bit per value—if the quantization error is carried forward across minibatches (error feedback). This size reduction makes it feasible to parallelize SGD through data-parallelism with fast processors like recent GPUs. We implement data-parallel deterministically distributed SGD by combining this finding with AdaGrad, automatic minibatch-size selection, double buffering, and model parallelism. Unexpectedly, quantization benefits AdaGrad, giving a small accuracy gain. For a typical Switchboard DNN with 46M parameters, we reach computation speeds of 27k frames per second (kfps) when using 2880 samples per minibatch, and 51kfps with 16k, on a server with 8 K20X GPUs. This corresponds to speed-ups over a single GPU of 3.6 and 6.3, respectively. 7 training passes over 309h of data complete in under 7h. A 160M-parameter model training processes 3300h of data in under 16h on 20 dual-GPU servers—a 10 times speed-up—albeit at a small accuracy loss. <|reference_end|>",
"<|reference_start|> LQ-Nets: Learned Quantization for Highly Accurate and Compact Deep Neural Networks: Although weight and activation quantization is an effective approach for Deep Neural Network (DNN) compression and has a lot of potentials to increase inference speed leveraging bit-operations, there is still a noticeable gap in terms of prediction accuracy between the quantized model and the full-precision model. To address this gap, we propose to jointly train a quantized, bit-operation-compatible DNN and its associated quantizers, as opposed to using fixed, handcrafted quantization schemes such as uniform or logarithmic quantization. Our method for learning the quantizers applies to both network weights and activations with arbitrary-bit precision, and our quantizers are easy to train. The comprehensive experiments on CIFAR-10 and ImageNet datasets show that our method works consistently well for various network structures such as AlexNet, VGG-Net, GoogLeNet, ResNet, and DenseNet, surpassing previous quantization methods in terms of accuracy by an appreciable margin. Code available at https://github.com/Microsoft/LQ-Nets <|reference_end|>"
] | [
2,
8,
10,
24
] | {"<|multi_cite_1_1|>": "ss-1432810", "<|multi_cite_1_3|>": "ss-1110379", "<|multi_cite_1_4|>": "ss-1017681", "<|multi_cite_1_5|>": "ss-1532772", "<|multi_cite_1_6|>": "ss-1013228", "<|multi_cite_1_7|>": "ss-1013229", "<|multi_cite_1_8|>": "ss-1922963", "<|multi_cite_1_9|>": "arxiv-54645", "<|multi_cite_1_10|>": "arxiv-70595", "<|multi_cite_2_1|>": "ss-1017681", "<|multi_cite_2_2|>": "ss-708784", "<|multi_cite_2_3|>": "arxiv-72783", "<|multi_cite_2_4|>": "arxiv-93961", "<|multi_cite_2_5|>": "arxiv-100527", "<|multi_cite_2_6|>": "arxiv-124789", "<|multi_cite_2_7|>": "arxiv-148157", "<|cite_3|>": "arxiv-61101", "<|cite_4|>": "ss-755794", "<|cite_5|>": "ss-1969406", "<|cite_6|>": "arxiv-107402", "<|cite_7|>": "arxiv-124789", "<|cite_8|>": "ss-1969407", "<|cite_9|>": "arxiv-206108", "<|cite_10|>": "ss-1277751", "<|cite_11|>": "arxiv-167267", "<|cite_12|>": "ss-1266099"} |
2112.07791 | <|paper_start|> Title: A Simple But Powerful Graph Encoder for Temporal Knowledge Graph Completion
Abstract: A Simple But Powerful Graph Encoder for Temporal Knowledge Graph Completion: Knowledge graphs contain rich knowledge about various entities and the relational information among them, while temporal knowledge graphs (TKGs) describe and model the interactions of the entities over time. In this context, automatic temporal knowledge graph completion (TKGC) has gained great interest. Recent TKGC methods integrate advanced deep learning techniques, e.g., Transformers, and achieve superior model performance. However, this also introduces a large number of excessive parameters, which brings a heavier burden for parameter optimization. In this paper, we propose a simple but powerful graph encoder for TKGC, called TARGCN. TARGCN is parameter-efficient, and it extensively explores every entity's temporal context for learning contextualized representations. We find that instead of adopting various kinds of complex modules, it is more beneficial to efficiently capture the temporal contexts of entities. We evaluate TARGCN on three benchmark datasets. Our model can achieve a more than 46% relative improvement on the GDELT dataset compared with state-of-the-art TKGC models. Meanwhile, it outperforms the strongest baseline on the ICEWS05-15 dataset with around 18% fewer parameters.
Introduction
\label{Introduction}
A \textit{Knowledge Graph} (KG) is a graph-structured \textit{Knowledge Base} (KB) that stores relational facts. KGs have drawn increasing research interest since they serve as key drivers for a wide range of downstream tasks in artificial intelligence, e.g., question answering <|cite_start|> (Reference: Forecasting Question Answering over Temporal Knowledge Graphs: Question answering over temporal knowledge graphs (TKGQA) has recently found increasing interest. TKGQA requires temporal reasoning techniques to extract the relevant information from temporal knowledge bases. The only existing TKGQA dataset, i.e., C RON Q UESTIONS , consists of temporal questions based on the facts from a fixed time period, where a temporal knowledge graph (TKG) spanning the same period can be fully used for answer inference, allowing the TKGQA models to use even the future knowledge to answer the questions based on the past facts. In real-world scenarios, however, it is also common that given the knowledge until now, we wish the TKGQA systems to answer the questions asking about the future. As humans constantly seek plans for the future, building TKGQA systems for answering such forecasting questions is important. Nevertheless, this has still been unexplored in previous research. In this paper, we propose a novel task: forecasting question answering over temporal knowledge graphs. We also propose a large-scale TKGQA benchmark dataset, i.e., F ORECAST TKGQ UESTIONS , for this task. It includes three types of questions, i.e., entity prediction, yes-no, and fact reasoning questions. For every forecasting question in our dataset, QA models can only have access to the TKG information before the timestamp annotated in the given question for answer inference. We find that the state-of-the-art TKGQA methods perform poorly on forecasting questions, and they are unable to answer yes-no questions and fact reasoning questions. To this end, we propose F ORECAST TKGQA, a TKGQA model that employs a TKG forecasting module for future inference, to answer all three types of questions. Experimental results show that F ORECAST TKGQA outperforms recent TKGQA methods on the entity prediction questions, and it also shows great effectiveness in answering the other two types of questions.) <|cite_end|>, commonsense reasoning <|cite_start|> (Reference: KM-BART: Knowledge Enhanced Multimodal BART for Visual Commonsense Generation: We present Knowledge Enhanced Multimodal BART (KM-BART), which is a Transformer-based sequence-to-sequence model capable of reasoning about commonsense knowledge from multimodal inputs of images and texts. We adapt the generative BART architecture (Lewis et al., 2020) to a multimodal model with visual and textual inputs. We further develop novel pretraining tasks to improve the model performance on the Visual Commonsense Generation (VCG) task. In particular, our pretraining task of Knowledge-based Commonsense Generation (KCG) boosts model performance on the VCG task by leveraging commonsense knowledge from a large language model pretrained on external commonsense knowledge graphs. To the best of our knowledge, we are the first to propose a dedicated task for improving model performance on the VCG task. Experimental results show that our model reaches state-of-the-art performance on the VCG task (Park et al., 2020) by applying these novel pretraining tasks.) 
<|cite_end|>, and recommender systems <|cite_start|> (Reference: Explainable Reasoning over Knowledge Graphs for Recommendation: Incorporating knowledge graph into recommender systems has attracted increasing attention in recent years. By exploring the interlinks within a knowledge graph, the connectivity between users and items can be discovered as paths, which provide rich and complementary information to user-item interactions. Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user's interest. However, existing efforts have not fully explored this connectivity to infer user preferences, especially in terms of modeling the sequential dependencies within and holistic semantics of a path. In this paper, we contribute a new model named Knowledge-aware Path Recurrent Network (KPRN) to exploit knowledge graph for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction. Furthermore, we design a new weighted pooling operation to discriminate the strengths of different paths in connecting a user with an item, endowing our model with a certain level of explainability. We conduct extensive experiments on two datasets about movie and music, demonstrating significant improvements over state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine.) <|cite_end|>. A fact in a KG is described as a triplet $(s,r,o)$, e.g., (\textit{Joe Biden}, \textit{is president of}, \textit{USA}), where $s$, $o$, $r$ denote the subject entity, the object entity, and the relation between $s$ and $o$. While KGs contain rich knowledge about entities and the relational information among them, they do not consider the nature of ever-evolving relational facts over time. For example, consider a KG triplet (\textit{Donald Trump}, \textit{is president of}, \textit{USA}). According to world knowledge, this triplet is valid only before \textit{Joe Biden} took the place of \textit{Donald Trump} as the president of the \textit{USA}. This implies a shortcoming of KGs and calls for the introduction of \textit{Temporal Knowledge Graphs} (TKGs). In TKGs, every fact is augmented with a specific timestamp $t$ such that it can be described with a quadruple $(s,r,o,t)$. In this way, every fact in TKGs has its own time validity and this enables TKGs to capture the factual information in a time-varying context.
\textit{Temporal Knowledge Graph Completion} (TKGC) is a task aiming to infer the missing facts in TKGs. There exist two lines of TKGC methods. ($1$) A lot of prior methods attempt to incorporate temporal information into the existing KG reasoning scoring models and build novel time-aware score functions for TKGs <|cite_start|> (Reference: Deriving validity time in knowledge graph: Knowledge Graphs (KGs) are a popular means to represent knowledge on the Web, typically in the form of node/edge labelled directed graphs. We consider temporal KGs, in which edges are further annotated with time intervals, reflecting when the relationship between entities held in time. In this paper, we focus on the task of predicting time validity for unannotated edges. We introduce the problem as a variation of relational embedding. We adapt existing approaches, and explore the importance example selection and the incorporation of side information in the learning process. We present our experimental evaluation in details.) <|cite_end|> <|cite_start|> (Reference: Learning Sequence Encoders for Temporal Knowledge Graph Completion: Research on link prediction in knowledge graphs has mainly focused on static multi-relational data. In this work we consider temporal knowledge graphs where relations between entities may only hold for a time interval or a specific point in time. In line with previous work on static knowledge graphs, we propose to address this problem by learning latent entity and relation type representations. To incorporate temporal information, we utilize recurrent neural networks to learn time-aware representations of relation types which can be used in conjunction with existing latent factorization methods. The proposed approach is shown to be robust to common challenges in real-world KGs: the sparsity and heterogeneity of temporal expressions. Experiments show the benefits of our approach on four temporal KGs. The data sets are available under a permissive BSD-3 license 1.) <|cite_end|> <|cite_start|> (Reference: Embedding Models for Episodic Knowledge Graphs: In recent years a number of large-scale triple-oriented knowledge graphs have been generated and various models have been proposed to perform learning in those graphs. Most knowledge graphs are static and reflect the world in its current state. In reality, of course, the state of the world is changing: a healthy person becomes diagnosed with a disease and a new president is inaugurated. In this paper, we extend models for static knowledge graphs to temporal knowledge graphs. This enables us to store episodic data and to generalize to new facts (inductive learning). We generalize leading learning models for static knowledge graphs (i.e., Tucker, RESCAL, HolE, ComplEx, DistMult) to temporal knowledge graphs. In particular, we introduce a new tensor model, ConT, with superior generalization performance. The performances of all proposed models are analyzed on two different datasets: the Global Database of Events, Language, and Tone (GDELT) and the database for Integrated Conflict Early Warning System (ICEWS). We argue that temporal knowledge graph embeddings might be models also for cognitive episodic memory (facts we remember and can recollect) and that a semantic memory (current facts we know) can be generated from episodic memory by a marginalization operation. We validate this episodic-to-semantic projection hypothesis with the ICEWS dataset.) 
<|cite_end|> <|cite_start|> (Reference: Tensor Decompositions for temporal knowledge base completion: Most algorithms for representation learning and link prediction in relational data have been designed for static data. However, the data they are applied to usually evolves with time, such as friend graphs in social networks or user interactions with items in recommender systems. This is also the case for knowledge bases, which contain facts such as (US, has president, B. Obama, [2009-2017]) that are valid only at certain points in time. For the problem of link prediction under temporal constraints, i.e., answering queries such as (US, has president, ?, 2012), we propose a solution inspired by the canonical decomposition of tensors of order 4. We introduce new regularization schemes and present an extension of ComplEx (Trouillon et al., 2016) that achieves state-of-the-art performance. Additionally, we propose a new dataset for knowledge base completion constructed from Wikidata, larger than previous benchmarks by an order of magnitude, as a new reference for evaluating temporal and non-temporal link prediction methods.) <|cite_end|> <|cite_start|> (Reference: Temporal Knowledge Graph Completion using Box Embeddings: Knowledge graph completion is the task of inferring missing facts based on existing data in a knowledge graph. Temporal knowledge graph completion (TKGC) is an extension of this task to temporal knowledge graphs, where each fact is additionally associated with a time stamp. Current approaches for TKGC primarily build on existing embedding models which are developed for (static) knowledge graph completion, and extend these models to incorporate time, where the idea is to learn latent representations for entities, relations, and timestamps and then use the learned representations to predict missing facts at various time steps. In this paper, we propose BoxTE, a box embedding model for TKGC, building on the static knowledge graph embedding model BoxE. We show that BoxTE is fully expressive, and possesses strong inductive capacity in the temporal setting. We then empirically evaluate our model and show that it achieves state-of-the-art results on several TKGC benchmarks.) <|cite_end|>.
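As a rough illustration of this first family of approaches, the sketch below shows a generic translation-style score function in which a timestamp embedding acts as an additional translation; it is only meant to convey the idea, and the exact formulations of the cited models differ.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n_ent, n_rel, n_ts = 32, 100, 20, 50
E = rng.normal(size=(n_ent, d))    # entity embeddings
R = rng.normal(size=(n_rel, d))    # relation embeddings
T = rng.normal(size=(n_ts, d))     # timestamp embeddings

def score(s, r, o, t):
    """Time-aware translational score; higher is better."""
    return -np.linalg.norm(E[s] + R[r] + T[t] - E[o])

def rank_objects(s, r, t):
    """Rank all candidate objects for the query (s, r, ?, t)."""
    scores = -np.linalg.norm(E[s] + R[r] + T[t] - E, axis=1)
    return np.argsort(-scores)
\end{verbatim}
Training would then fit E, R and T so that observed quadruples score higher than corrupted ones, exactly as in static KG embedding models.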
($2$) Another line of work takes advantage of neural structures, e.g., \textit{Graph Neural Networks} (GNNs) <|cite_start|> (Reference: Learning Convolutional Neural Networks for Graphs: Numerous important problems can be framed as learning from graph data. We propose a framework for learning convolutional neural networks for arbitrary graphs. These graphs may be undirected, directed, and with both discrete and continuous node and edge attributes. Analogous to image-based convolutional networks that operate on locally connected regions of the input, we present a general approach to extracting locally connected regions from graphs. Using established benchmark data sets, we demonstrate that the learned feature representations are competitive with state of the art graph kernels and that their computation is highly efficient.) <|cite_end|> <|cite_start|> (Reference: Semi-Supervised Classification with Graph Convolutional Networks: We present a scalable approach for semi-supervised learning on graph-structured data that is based on an efficient variant of convolutional neural networks which operate directly on graphs. We motivate the choice of our convolutional architecture via a localized first-order approximation of spectral graph convolutions. Our model scales linearly in the number of graph edges and learns hidden layer representations that encode both local graph structure and features of nodes. In a number of experiments on citation networks and on a knowledge graph dataset we demonstrate that our approach outperforms related methods by a significant margin.) <|cite_end|> and recurrent models, for modeling the temporal information in TKGC <|cite_start|> (Reference: TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion: Inferring missing facts in temporal knowledge graphs (TKGs) is a fundamental and challenging task. Previous works have approached this problem by augmenting methods for static knowledge graphs to leverage time-dependent representations. However, these methods do not explicitly leverage multi-hop structural information and temporal facts from recent time steps to enhance their predictions. Additionally, prior work does not explicitly address the temporal sparsity and variability of entity distributions in TKGs. We propose the Temporal Message Passing (TeMP) framework to address these challenges by combining graph neural networks, temporal dynamics models, data imputation and frequency-based gating techniques. Experiments on standard TKG tasks show that our approach provides substantial gains compared to the previous state of the art, achieving a 10.7% average relative improvement in Hits@10 across three standard benchmarks. Our analysis also reveals important sources of variability both within and across TKG datasets, and we introduce several simple but strong baselines that outperform the prior state of the art in certain settings.) <|cite_end|> <|cite_start|> (Reference: Learning to Walk across Time for Interpretable Temporal Knowledge
Graph Completion: Static knowledge graphs (KGs), despite their wide usage in relational reasoning and downstream tasks, fall short of realistic modeling of knowledge and facts that are only temporarily valid. Compared to static knowledge graphs, temporal knowledge graphs (TKGs) inherently reflect the transient nature of real-world knowledge. Naturally, automatic TKG completion has drawn much research interests for a more realistic modeling of relational reasoning. However, most of the existing models for TKG completion extend static KG embeddings that do not fully exploit TKG structure, thus lacking in 1) accounting for temporally relevant events already residing in the local neighborhood of a query, and 2) path-based inference that facilitates multi-hop reasoning and better interpretability. In this paper, we propose T-GAP, a novel model for TKG completion that maximally utilizes both temporal information and graph structure in its encoder and decoder. T-GAP encodes query-specific substructure of TKG by focusing on the temporal displacement between each event and the query timestamp, and performs path-based inference by propagating attention through the graph. Our empirical experiments demonstrate that T-GAP not only achieves superior performance against state-of-the-art baselines, but also competently generalizes to queries with unseen timestamps. Through extensive qualitative analyses, we also show that T-GAP enjoys transparent interpretability, and follows human intuition in its reasoning process.) <|cite_end|>. Experimental results show that neural structures help to achieve state-of-the-art performance on the TKGC task. However, employing additional neural structures on top of the existing KG score functions normally leads to a higher number of model parameters. The parameter consumption increases even more when these models are equipped with advanced deep learning modules, e.g., attention mechanisms and Transformers <|cite_start|> (Reference: Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.) <|cite_end|>, thus causing high memory consumption and bringing a heavier burden for parameter optimization.
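To make the graph-encoder idea concrete, the following sketch combines a relation-aware message-passing step with a functional encoding of the time difference between each fact and the query time. It is a generic illustration under our own simplifying assumptions (random weights, mean aggregation) and not the architecture of any specific cited model.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, d_t, n_ent, n_rel = 16, 8, 6, 3
H = rng.normal(size=(n_ent, d))                 # entity features
W = rng.normal(size=(n_rel, d + d_t, d)) * 0.1  # per-relation weights
omega = rng.normal(size=d_t)                    # time-encoder frequencies
phi = rng.uniform(0, 2 * np.pi, size=d_t)       # time-encoder phases

def time_enc(dt):
    """Functional encoding of a time difference."""
    return np.cos(omega * dt + phi)

# Temporal facts (s, r, o, t) in the neighborhood of a query at t_query.
facts = [(0, 0, 1, 3), (2, 1, 1, 7), (3, 2, 1, 1), (4, 0, 5, 5)]
t_query = 10

def time_aware_layer(H, facts, t_query):
    out, deg = np.zeros_like(H), np.zeros(n_ent)
    for s, r, o, t in facts:
        msg = np.concatenate([H[s], time_enc(t_query - t)])
        out[o] += msg @ W[r]        # relation-specific transformation
        deg[o] += 1
    return np.tanh(H + out / np.maximum(deg, 1.0)[:, None])

H1 = time_aware_layer(H, facts, t_query)
\end{verbatim}
Because the encoder only sees time differences, the same learned pattern can in principle be reused at absolute timestamps that never occur in the training data.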
In this paper, we follow the second line of methods and design a neural graph encoder for TKGC that reduces parameter consumption and model complexity while maintaining superior performance. We propose a time-aware relational graph encoder: \textit{\textbf{T}ime-\textbf{a}ware \textbf{R}elational \textbf{G}raph \textbf{C}onvolutional \textbf{N}etwork} (TARGCN). We find that this lightweight encoder performs well on the TKGC task while requiring relatively few parameters. The contributions of our work can be summarized as follows:
(i) We propose a time-aware relational graph encoder, TARGCN, for the TKGC task. TARGCN learns an entity's time-aware representation by sampling a temporal neighboring graph that contains an extensive set of temporal neighbors, and it encodes temporal information by modeling time differences with a functional time encoder.
(ii) To test the robustness of TKGC models on irregularly timestamped data, we propose a new dataset, ICEWS14-irregular, on which TARGCN achieves superior performance compared with several recently proposed TKGC methods. Moreover, TARGCN outperforms previous methods by a large margin when predicting links at unseen timestamps, which further demonstrates its robustness.
(iii) TARGCN serves as a parameter-efficient model. To achieve the same performance, it requires much fewer parameters compared with two recently proposed neural-based TKG reasoning models, TeMP <|cite_start|> (Reference: TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion: Inferring missing facts in temporal knowledge graphs (TKGs) is a fundamental and challenging task. Previous works have approached this problem by augmenting methods for static knowledge graphs to leverage time-dependent representations. However, these methods do not explicitly leverage multi-hop structural information and temporal facts from recent time steps to enhance their predictions. Additionally, prior work does not explicitly address the temporal sparsity and variability of entity distributions in TKGs. We propose the Temporal Message Passing (TeMP) framework to address these challenges by combining graph neural networks, temporal dynamics models, data imputation and frequency-based gating techniques. Experiments on standard TKG tasks show that our approach provides substantial gains compared to the previous state of the art, achieving a 10.7% average relative improvement in Hits@10 across three standard benchmarks. Our analysis also reveals important sources of variability both within and across TKG datasets, and we introduce several simple but strong baselines that outperform the prior state of the art in certain settings.) <|cite_end|> and T-GAP <|cite_start|> (Reference: Learning to Walk across Time for Interpretable Temporal Knowledge
Graph Completion: Static knowledge graphs (KGs), despite their wide usage in relational reasoning and downstream tasks, fall short of realistic modeling of knowledge and facts that are only temporarily valid. Compared to static knowledge graphs, temporal knowledge graphs (TKGs) inherently reflect the transient nature of real-world knowledge. Naturally, automatic TKG completion has drawn much research interests for a more realistic modeling of relational reasoning. However, most of the existing models for TKG completion extend static KG embeddings that do not fully exploit TKG structure, thus lacking in 1) accounting for temporally relevant events already residing in the local neighborhood of a query, and 2) path-based inference that facilitates multi-hop reasoning and better interpretability. In this paper, we propose T-GAP, a novel model for TKG completion that maximally utilizes both temporal information and graph structure in its encoder and decoder. T-GAP encodes query-specific substructure of TKG by focusing on the temporal displacement between each event and the query timestamp, and performs path-based inference by propagating attention through the graph. Our empirical experiments demonstrate that T-GAP not only achieves superior performance against state-of-the-art baselines, but also competently generalizes to queries with unseen timestamps. Through extensive qualitative analyses, we also show that T-GAP enjoys transparent interpretability, and follows human intuition in its reasoning process.) <|cite_end|>.
(iv) We evaluate TARGCN on three benchmark TKGC datasets. It achieves superior performance on all datasets. On the GDELT dataset, it achieves a more than 46\% relative improvement compared with the best baseline. <|paper_end|> | [
"<|reference_start|> TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion: Inferring missing facts in temporal knowledge graphs (TKGs) is a fundamental and challenging task. Previous works have approached this problem by augmenting methods for static knowledge graphs to leverage time-dependent representations. However, these methods do not explicitly leverage multi-hop structural information and temporal facts from recent time steps to enhance their predictions. Additionally, prior work does not explicitly address the temporal sparsity and variability of entity distributions in TKGs. We propose the Temporal Message Passing (TeMP) framework to address these challenges by combining graph neural networks, temporal dynamics models, data imputation and frequency-based gating techniques. Experiments on standard TKG tasks show that our approach provides substantial gains compared to the previous state of the art, achieving a 10.7% average relative improvement in Hits@10 across three standard benchmarks. Our analysis also reveals important sources of variability both within and across TKG datasets, and we introduce several simple but strong baselines that outperform the prior state of the art in certain settings. <|reference_end|>",
"<|reference_start|> Attention Is All You Need: The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data. <|reference_end|>",
"<|reference_start|> TeMP: Temporal Message Passing for Temporal Knowledge Graph Completion: Inferring missing facts in temporal knowledge graphs (TKGs) is a fundamental and challenging task. Previous works have approached this problem by augmenting methods for static knowledge graphs to leverage time-dependent representations. However, these methods do not explicitly leverage multi-hop structural information and temporal facts from recent time steps to enhance their predictions. Additionally, prior work does not explicitly address the temporal sparsity and variability of entity distributions in TKGs. We propose the Temporal Message Passing (TeMP) framework to address these challenges by combining graph neural networks, temporal dynamics models, data imputation and frequency-based gating techniques. Experiments on standard TKG tasks show that our approach provides substantial gains compared to the previous state of the art, achieving a 10.7% average relative improvement in Hits@10 across three standard benchmarks. Our analysis also reveals important sources of variability both within and across TKG datasets, and we introduce several simple but strong baselines that outperform the prior state of the art in certain settings. <|reference_end|>",
"<|reference_start|> Learning to Walk across Time for Interpretable Temporal Knowledge\nGraph Completion: Static knowledge graphs (KGs), despite their wide usage in relational reasoning and downstream tasks, fall short of realistic modeling of knowledge and facts that are only temporarily valid. Compared to static knowledge graphs, temporal knowledge graphs (TKGs) inherently reflect the transient nature of real-world knowledge. Naturally, automatic TKG completion has drawn much research interests for a more realistic modeling of relational reasoning. However, most of the existing models for TKG completion extend static KG embeddings that do not fully exploit TKG structure, thus lacking in 1) accounting for temporally relevant events already residing in the local neighborhood of a query, and 2) path-based inference that facilitates multi-hop reasoning and better interpretability. In this paper, we propose T-GAP, a novel model for TKG completion that maximally utilizes both temporal information and graph structure in its encoder and decoder. T-GAP encodes query-specific substructure of TKG by focusing on the temporal displacement between each event and the query timestamp, and performs path-based inference by propagating attention through the graph. Our empirical experiments demonstrate that T-GAP not only achieves superior performance against state-of-the-art baselines, but also competently generalizes to queries with unseen timestamps. Through extensive qualitative analyses, we also show that T-GAP enjoys transparent interpretability, and follows human intuition in its reasoning process. <|reference_end|>"
] | [
10,
12,
13,
14
] | {"<|cite_1|>": "ss-2488290", "<|cite_2|>": "ss-2441547", "<|cite_3|>": "arxiv-179994", "<|multi_cite_4_1|>": "ss-1542718", "<|multi_cite_4_2|>": "arxiv-172022", "<|multi_cite_4_3|>": "arxiv-164318", "<|multi_cite_4_4|>": "arxiv-258648", "<|multi_cite_4_5|>": "arxiv-367983", "<|multi_cite_5_1|>": "arxiv-98116", "<|multi_cite_5_2|>": "arxiv-105493", "<|multi_cite_6_1|>": "arxiv-294620", "<|multi_cite_6_2|>": "ss-2283203", "<|cite_7|>": "arxiv-126595", "<|cite_8|>": "arxiv-294620", "<|cite_9|>": "ss-2283203"} |
2004.14185-1 | <|cite_start|> (Reference: Electrophysiological signatures of resting state networks in the human brain: Functional neuroimaging and electrophysiological studies have documented a dynamic baseline of intrinsic (not stimulus- or task-evoked) brain activity during resting wakefulness. This baseline is characterized by slow (<0.1 Hz) fluctuations of functional imaging signals that are topographically organized in discrete brain networks, and by much faster (1–80 Hz) electrical oscillations. To investigate the relationship between hemodynamic and electrical oscillations, we have adopted a completely data-driven approach that combines information from simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). Using independent component analysis on the fMRI data, we identified six widely distributed resting state networks. The blood oxygenation level-dependent signal fluctuations associated with each network were correlated with the EEG power variations of delta, theta, alpha, beta, and gamma rhythms. Each functional network was characterized by a specific electrophysiological signature that involved the combination of different brain rhythms. Moreover, the joint EEG/fMRI analysis afforded a finer physiological fractionation of brain networks in the resting human brain. This result supports for the first time in humans the coalescence of several brain rhythms within large-scale brain networks as suggested by biophysical studies.) <|cite_end|> <|cite_start|> (Reference: What can be found in scalp EEG spectrum beyond common frequency bands. EEG–fMRI study: Objective. The scalp EEG spectrum is a frequently used marker of neural activity. Commonly, the preprocessing of EEG utilizes constraints, e.g. dealing with a predefined subset of electrodes or a predefined frequency band of interest. Such treatment of the EEG spectrum neglects the fact that particular neural processes may be reflected in several frequency bands and/or several electrodes concurrently, and can overlook the complexity of the structure of the EEG spectrum. Approach. We showed that the EEG spectrum structure can be described by parallel factor analysis (PARAFAC), a method which blindly uncovers the spatial–temporal–spectral patterns of EEG. We used an algorithm based on variational Bayesian statistics to reveal nine patterns from the EEG of 38 healthy subjects, acquired during a semantic decision task. The patterns reflected neural activity synchronized across theta, alpha, beta and gamma bands and spread over many electrodes, as well as various EEG artifacts. Main results. Specifically, one of the patterns showed significant correlation with the stimuli timing. The correlation was higher when compared to commonly used models of neural activity (power fluctuations in distinct frequency band averaged across a subset of electrodes) and we found significantly correlated hemodynamic fluctuations in simultaneously acquired fMRI data in regions known to be involved in speech processing. Further, we show that the pattern also occurs in EEG data which were acquired outside the MR machine. Two other patterns reflected brain rhythms linked to the attentional and basal ganglia large scale networks. The other patterns were related to various EEG artifacts. Significance. These results show that PARAFAC blindly identifies neural activity in the EEG spectrum and that it naturally handles the correlations among frequency bands and electrodes. 
We conclude that PARAFAC seems to be a powerful tool for analysis of the EEG spectrum and might bring novel insight to the relationships between EEG activity and brain hemodynamics.) <|cite_end|>. Blind Source Separation (BSS) techniques consider EEG and/or fMRI data to be a superposition of several `sources' of physiological activity and nonphysiological influences. Based on the observed data alone, BSS techniques are used to estimate both the sources and the mixing system, by means of a factorization of the data into two (or more) factor matrices, holding sources or mixing profiles along the columns. They naturally allow a symmetrical treatment of EEG and fMRI data, enabling true fusion of both modalities <|cite_start|> (Reference: Model driven EEG/fMRI fusion of brain oscillations: This article reviews progress and challenges in model driven EEG/fMRI fusion with a focus on brain oscillations. Fusion is the combination of both imaging modalities based on a cascade of forward models from ensemble of post‐synaptic potentials (ePSP) to net primary current densities (nPCD) to EEG; and from ePSP to vasomotor feed forward signal (VFFSS) to BOLD. In absence of a model, data driven fusion creates maps of correlations between EEG and BOLD or between estimates of nPCD and VFFS. A consistent finding has been that of positive correlations between EEG alpha power and BOLD in both frontal cortices and thalamus and of negative ones for the occipital region. For model driven fusion we formulate a neural mass EEG/fMRI model coupled to a metabolic hemodynamic model. For exploratory simulations we show that the Local Linearization (LL) method for integrating stochastic differential equations is appropriate for highly nonlinear dynamics. It has been successfully applied to small and medium sized networks, reproducing the described EEG/BOLD correlations. A new LL‐algebraic method allows simulations with hundreds of thousands of neural populations, with connectivities and conduction delays estimated from diffusion weighted MRI. For parameter and state estimation, Kalman filtering combined with the LL method estimates the innovations or prediction errors. From these the likelihood of models given data are obtained. The LL‐innovation estimation method has been already applied to small and medium scale models. With improved Bayesian computations the practical estimation of very large scale EEG/fMRI models shall soon be possible. Hum Brain Mapp, 2009. © 2008 Wiley‐Liss, Inc.) <|cite_end|> <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. 
This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds.) <|cite_end|> <|cite_start|> (Reference: A review of group ICA for fMRI data and ICA for joint inference of imaging, genetic, and ERP data: ) <|cite_end|>, which is in contrast to EEG-correlated fMRI, where EEG-derived IEDs inform the fMRI analysis. Furthermore, BSS techniques naturally accommodate higher-order representations of the data in the form of tensors or multiway arrays, which can capture the rich structure in the data. Indeed, measurements of brain activity inherently vary along several modes (subjects, EEG channels, frequency, time, ...), which cannot be represented using matrix-based techniques like ICA without loss of structure or information <|cite_start|> (Reference: Tensor Decomposition for Signal Processing and Machine Learning: Tensors or {\em multi-way arrays} are functions of three or more indices $(i,j,k,\cdots)$ -- similar to matrices (two-way arrays), which are functions of two indices $(r,c)$ for (row,column). Tensors have a rich history, stretching over almost a century, and touching upon numerous disciplines; but they have only recently become ubiquitous in signal and data analytics at the confluence of signal processing, statistics, data mining and machine learning. This overview article aims to provide a good starting point for researchers and practitioners interested in learning about and working with tensors. As such, it focuses on fundamentals and motivation (using various application examples), aiming to strike an appropriate balance of breadth {\em and depth} that will enable someone having taken first graduate courses in matrix algebra and probability to get started doing research and/or developing tensor algorithms and software. Some background in applied optimization is useful but not strictly required. The material covered includes tensor rank and rank decomposition; basic tensor factorization models and their relationships and properties (including fairly good coverage of identifiability); broad coverage of algorithms ranging from alternating optimization to stochastic gradient; statistical performance analysis; and applications ranging from source separation to collaborative filtering, mixture and topic modeling, classification, and multilinear subspace learning.) <|cite_end|> <|cite_start|> (Reference: {Multimodal data fusion: an overview of methods, challenges, and prospects: In various disciplines, information about the same phenomenon can be acquired from different types of detectors, at different conditions, in multiple experiments or subjects, among others. We use the term “modality” for each such acquisition framework. Due to the rich characteristics of natural phenomena, it is rare that a single modality provides complete knowledge of the phenomenon of interest. 
The increasing availability of several modalities reporting on the same system introduces new degrees of freedom, which raise questions beyond those related to exploiting each modality separately. As we argue, many of these questions, or “challenges,” are common to multiple domains. This paper deals with two key issues: “why we need data fusion” and “how we perform it.” The first issue is motivated by numerous examples in science and technology, followed by a mathematical framework that showcases some of the benefits that data fusion provides. In order to address the second issue, “diversity” is introduced as a key concept, and a number of data-driven solutions based on matrix and tensor decompositions are discussed, emphasizing how they account for diversity across the data sets. The aim of this paper is to provide the reader, regardless of his or her community of origin, with a taste of the vastness of the field, the prospects, and the opportunities that it holds.) <|cite_end|> <|cite_start|> (Reference: {Tensor Decompositions and Applications: This survey provides an overview of higher-order tensor decompositions, their applications, and available software. A tensor is a multidimensional or $N$-way array. Decompositions of higher-order tensors (i.e., $N$-way arrays with $N \geq 3$) have applications in psycho-metrics, chemometrics, signal processing, numerical linear algebra, computer vision, numerical analysis, data mining, neuroscience, graph analysis, and elsewhere. Two particular tensor decompositions can be considered to be higher-order extensions of the matrix singular value decomposition: CANDECOMP/PARAFAC (CP) decomposes a tensor as a sum of rank-one tensors, and the Tucker decomposition is a higher-order form of principal component analysis. There are many other tensor decompositions, including INDSCAL, PARAFAC2, CANDELINC, DEDICOM, and PARATUCK2 as well as nonnegative variants of all of the above. The N-way Toolbox, Tensor Toolbox, and Multilinear Engine are examples of software packages for working with tensors.) <|cite_end|> <|cite_start|> (Reference: Multiway Analysis of epilepsy tensors: MOTIVATION
The success or failure of an epilepsy surgery depends greatly on the localization of epileptic focus (origin of a seizure). We address the problem of identification of a seizure origin through an analysis of ictal electroencephalogram (EEG), which is proven to be an effective standard in epileptic focus localization.
SUMMARY
With a goal of developing an automated and robust way of visual analysis of large amounts of EEG data, we propose a novel approach based on multiway models to study epilepsy seizure structure. Our contributions are 3-fold. First, we construct an Epilepsy Tensor with three modes, i.e. time samples, scales and electrodes, through wavelet analysis of multi-channel ictal EEG. Second, we demonstrate that multiway analysis techniques, in particular parallel factor analysis (PARAFAC), provide promising results in modeling the complex structure of an epilepsy seizure, localizing a seizure origin and extracting artifacts. Third, we introduce an approach for removing artifacts using multilinear subspace analysis and discuss its merits and drawbacks.
RESULTS
Ictal EEG analysis of 10 seizures from 7 patients are included in this study. Our results for 8 seizures match with clinical observations in terms of seizure origin and extracted artifacts. On the other hand, for 2 of the seizures, seizure localization is not achieved using an initial trial of PARAFAC modeling. In these cases, first, we apply an artifact removal method and subsequently apply the PARAFAC model on the epilepsy tensor from which potential artifacts have been removed. This method successfully identifies the seizure origin in both cases.) <|cite_end|>.
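To illustrate the factorization view of BSS, the toy example below generates a noiseless linear mixture and recovers a rank-R factorization of it. A plain truncated SVD is used only as a stand-in for the factorization step; in practice, the statistical assumptions of ICA or the multilinear structure of tensor decompositions are what resolve the remaining rotational ambiguity.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples, R = 21, 1000, 3

S_true = rng.normal(size=(R, n_samples))   # source time courses
A_true = rng.normal(size=(n_channels, R))  # mixing (spatial) profiles
X = A_true @ S_true                        # observed EEG-like data

# BSS model: X ~= A @ S, with both factors estimated from X alone.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
A_hat = U[:, :R] * s[:R]   # estimated mixing profiles (up to rotation/scale)
S_hat = Vt[:R]             # estimated sources (up to rotation/scale)
\end{verbatim}
The columns of A_hat play the role of spatial signatures and the rows of S_hat the role of source time courses.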
Tensor-based BSS techniques have been used to mine unimodal EEG data by decomposing third-order spectrograms \mbox{(channels $\times$ time points $\times$ wavelet scales)} into several `atoms' (also coined `components' or `sources'), each with a distinct spatial, temporal and spectral profile/signature <|cite_start|> (Reference: Decomposing EEG data into space–time–frequency components using Parallel Factor Analysis: ) <|cite_end|> <|cite_start|> (Reference: Parallel Factor Analysis as an exploratory tool for wavelet transformed event-related EEG: ) <|cite_end|> <|cite_start|> (Reference: What can be found in scalp EEG spectrum beyond common frequency bands. EEG–fMRI study: Objective. The scalp EEG spectrum is a frequently used marker of neural activity. Commonly, the preprocessing of EEG utilizes constraints, e.g. dealing with a predefined subset of electrodes or a predefined frequency band of interest. Such treatment of the EEG spectrum neglects the fact that particular neural processes may be reflected in several frequency bands and/or several electrodes concurrently, and can overlook the complexity of the structure of the EEG spectrum. Approach. We showed that the EEG spectrum structure can be described by parallel factor analysis (PARAFAC), a method which blindly uncovers the spatial–temporal–spectral patterns of EEG. We used an algorithm based on variational Bayesian statistics to reveal nine patterns from the EEG of 38 healthy subjects, acquired during a semantic decision task. The patterns reflected neural activity synchronized across theta, alpha, beta and gamma bands and spread over many electrodes, as well as various EEG artifacts. Main results. Specifically, one of the patterns showed significant correlation with the stimuli timing. The correlation was higher when compared to commonly used models of neural activity (power fluctuations in distinct frequency band averaged across a subset of electrodes) and we found significantly correlated hemodynamic fluctuations in simultaneously acquired fMRI data in regions known to be involved in speech processing. Further, we show that the pattern also occurs in EEG data which were acquired outside the MR machine. Two other patterns reflected brain rhythms linked to the attentional and basal ganglia large scale networks. The other patterns were related to various EEG artifacts. Significance. These results show that PARAFAC blindly identifies neural activity in the EEG spectrum and that it naturally handles the correlations among frequency bands and electrodes. We conclude that PARAFAC seems to be a powerful tool for analysis of the EEG spectrum and might bring novel insight to the relationships between EEG activity and brain hemodynamics.) <|cite_end|>, with successful application in seizure EEG analysis <|cite_start|> (Reference: Multiway Analysis of epilepsy tensors: MOTIVATION
The success or failure of an epilepsy surgery depends greatly on the localization of epileptic focus (origin of a seizure). We address the problem of identification of a seizure origin through an analysis of ictal electroencephalogram (EEG), which is proven to be an effective standard in epileptic focus localization.
SUMMARY
With a goal of developing an automated and robust way of visual analysis of large amounts of EEG data, we propose a novel approach based on multiway models to study epilepsy seizure structure. Our contributions are 3-fold. First, we construct an Epilepsy Tensor with three modes, i.e. time samples, scales and electrodes, through wavelet analysis of multi-channel ictal EEG. Second, we demonstrate that multiway analysis techniques, in particular parallel factor analysis (PARAFAC), provide promising results in modeling the complex structure of an epilepsy seizure, localizing a seizure origin and extracting artifacts. Third, we introduce an approach for removing artifacts using multilinear subspace analysis and discuss its merits and drawbacks.
RESULTS
Ictal EEG analysis of 10 seizures from 7 patients are included in this study. Our results for 8 seizures match with clinical observations in terms of seizure origin and extracted artifacts. On the other hand, for 2 of the seizures, seizure localization is not achieved using an initial trial of PARAFAC modeling. In these cases, first, we apply an artifact removal method and subsequently apply the PARAFAC model on the epilepsy tensor from which potential artifacts have been removed. This method successfully identifies the seizure origin in both cases.) <|cite_end|> <|cite_start|> (Reference: Canonical Decomposition of Ictal Scalp EEG and Accurate Source Localisation: Principles and Simulation Study: Long-term electroencephalographic (EEG) recordings are important in the presurgical evaluation of refractory partial epilepsy for the delineation of the ictal onset zones. In this paper, we introduce a new concept for an automatic, fast, and objective localisation of the ictal onset zone in ictal EEG recordings. Canonical decomposition of ictal EEG decomposes the EEG in atoms. One or more atoms are related to the seizure activity. A single dipole was then fitted to model the potential distribution of each epileptic atom. In this study, we performed a simulation study in order to estimate the dipole localisation error. Ictal dipole localisation was very accurate, even at low signal-to-noise ratios, was not affected by seizure activity frequency or frequency changes, and was minimally affected by the waveform and depth of the ictal onset zone location. Ictal dipole localisation error using 21 electrodes was around 10.0 mm and improved more than tenfold in the range of 0.5–1.0 mm using 148 channels. In conclusion, our simulation study of canonical decomposition of ictal scalp EEG allowed a robust and accurate localisation of the ictal onset zone.) <|cite_end|>. While a tensor extension of ICA for group fMRI data (in the form of \mbox{subjects $\times$ time points $\times$ voxels}) exists <|cite_start|> (Reference: Tensorial extensions of independent component analysis for multisubject FMRI analysis: ) <|cite_end|>, matrix representations of fMRI remain dominant for single-subject analyses.
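The CP/PARAFAC structure underlying such spectrogram decompositions can be sketched as follows, with synthetic factor matrices standing in for the spatial, temporal and spectral signatures of the atoms; fitting the factors to a measured tensor would typically use an alternating least squares routine such as tensorly.decomposition.parafac.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_chan, n_time, n_scales, R = 21, 512, 30, 2

spatial  = rng.random(size=(n_chan, R))    # one topography per atom
temporal = rng.random(size=(n_time, R))    # one power time course per atom
spectral = rng.random(size=(n_scales, R))  # one spectrum per atom

# X[c, t, f] ~= sum_r spatial[c, r] * temporal[t, r] * spectral[f, r]
X = np.einsum('cr,tr,fr->ctf', spatial, temporal, spectral)
\end{verbatim}
Each rank-one term of this sum is one atom, characterized jointly by where (channels), when (time) and at which scales (frequency) it is expressed.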
Coupled BSS techniques can estimate components which are shared between both modalities, providing a characterization in both domains <|cite_start|> (Reference: Tensor decompositions and data fusion in epileptic electroencephalography and functional magnetic resonance imaging data: Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) record a mixture of ongoing neural processes, physiological and nonphysiological noise. The pattern of interest, such as epileptic activity, is often hidden within this noisy mixture. Therefore, blind source separation (BSS) techniques, which can retrieve the activity pattern of each underlying source, are very useful. Tensor decomposition techniques are very well suited to solve the BSS problem, as they provide a unique solution under mild constraints. Uniqueness is crucial for an unambiguous interpretation of the components, matching them to true neural processes and characterizing them using the component signatures. Moreover, tensors provide a natural representation of the inherently multidimensional EEG and fMRI, and preserve the structural information defined by the interdependencies among the various modes such as channels, time, patients, etc. Despite the well‐developed theoretical framework, tensor‐based analysis of real, large‐scale clinical datasets is still scarce. Indeed, the application of tensor methods is not straightforward. Finding an appropriate tensor representation, suitable tensor model, and interpretation are application dependent choices, which require expertise both in neuroscience and in multilinear algebra. The aim of this paper is to provide a general guideline for these choices and illustrate them through successful applications in epilepsy. WIREs Data Mining Knowl Discov 2017, 7:e1197. doi: 10.1002/widm.1197) <|cite_end|>. For example, in <|cite_start|> (Reference: ACMTF for fusion of multi-modal neuroimaging data and identification of biomarkers: Joint analysis of neuroimaging data from multiple modalities has the potential to improve our understanding of brain function since each modality provides complementary information. In this paper, we address the problem of jointly analyzing functional magnetic resonance imaging (fMRI), structural MRI (sMRI) and electroencephalography (EEG) data collected during an auditory oddball (AOD) task with the goal of capturing neural patterns that differ between patients with schizophrenia and healthy controls. Traditionally, fusion methods such as joint independent component analysis (jICA) have been used to jointly analyze such multi-modal neuroimaging data. However, previous jICA analyses typically analyze the EEG signal from a single electrode or concatenate signals from multiple electrodes, thus ignoring the potential multilinear structure of the EEG data, and models the data using a common mixing matrix for both modalities. In this paper, we arrange the multi-channel EEG signals as a third-order tensor with modes: subjects, time samples and electrodes, and jointly analyze the tensor with the fMRI and sMRI data, both in the form of subjects by voxels matrices, using a structure-revealing coupled matrix and tensor factorization (CMTF) model. Through this modeling approach, we (i) exploit the multilinear structure of multi-channel EEG data and (ii) capture weights for components indicative of the level of contribution from each modality. 
We compare the results of the structure-revealing CMTF model with those of jICA and demonstrate that, while both models capture significant distinguishing patterns between patients and controls, the structure-revealing CMTF model provides more robust activation.) <|cite_end|> <|cite_start|> (Reference: Unraveling Diagnostic Biomarkers of Schizophrenia through Structure-Revealing Fusion of Multi-Modal Neuroimaging Data: Fusing complementary information from different modalities can lead to the discovery of more accurate diagnostic biomarkers for psychiatric disorders. However, biomarker discovery through data fusion is challenging since it requires extracting interpretable and reproducible patterns from data sets, consisting of shared/unshared patterns and of different orders. For example, multi-channel electroencephalography (EEG) signals from multiple subjects can be represented as a third-order tensor with modes: subject, time, and channel, while functional magnetic resonance imaging (fMRI) data may be in the form of subject by voxel matrices. Traditional data fusion methods rearrange higher-order tensors, such as EEG, as matrices to use matrix factorization-based approaches. In contrast, fusion methods based on coupled matrix and tensor factorizations (CMTF) exploit the potential multi-way structure of higher-order tensors. The CMTF approach has been shown to capture underlying patterns more accurately without imposing strong constraints on the latent neural patterns, i.e., biomarkers. In this paper, EEG, fMRI and structural MRI (sMRI) data collected during an auditory oddball task (AOD) from a group of subjects consisting of patients with schizophrenia and healthy controls, are arranged as matrices and higher-order tensors coupled along the subject mode, and jointly analyzed using structure-revealing CMTF methods (also known as advanced CMTF (ACMTF)) focusing on unique identification of underlying patterns in the presence of shared/unshared patterns. We demonstrate that joint analysis of the EEG tensor and fMRI matrix using ACMTF reveals significant and biologically meaningful components in terms of differentiating between patients with schizophrenia and healthy controls while also providing spatial patterns with high resolution and improving the clustering performance compared to the analysis of only the EEG tensor. We also show that these patterns are reproducible, and study reproducibility for different model parameters. In comparison to the joint independent component analysis (jICA) data fusion approach, ACMTF provides easier interpretation of EEG data by revealing a single summary map of the topography for each component. Furthermore, fusion of sMRI data with EEG and fMRI through an ACMTF model provides structural patterns; however, we also show that when fusing data sets from multiple modalities, hence of very different nature, preprocessing plays a crucial role.) <|cite_end|> <|cite_start|> (Reference: Fusion of electroencephalography and functional magnetic resonance imaging to explore epileptic network activity: Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are two complementary modalities capturing a mixture of various underlying neural sources. The fusion of these modalities promises the best of both worlds, i.e. a better resolution in time and space, respectively. Assuming that EEG and fMRI observations are generated by the same mixing system in both modalities, their fusion can be achieved by joint blind source separation (BSS). 
We solve the joint BSS problem using different variants of joint independent component analysis (jointICA) and coupled matrix-tensor factorization (CMTF). We demonstrate that EEG-fMRI fusion provides a detailed spatio-temporal characterization of an EEG-fMRI dataset recorded in epilepsy patients, leading to new insights in epileptic network behaviour.) <|cite_end|> <|cite_start|> (Reference: Fusion of EEG and fMRI via soft coupled tensor decompositions: Data fusion refers to the joint analysis of multiple datasets which provide complementary views of the same task. In this paper, the problem of jointly analyzing electroencephalography (EEG) and functional Magnetic Resonance Imaging (fMRI) data is considered. Analyzing both EEG and fMRI measurements is highly beneficial for studying brain function because these modalities have complementary spatiotemporal resolutions: EEG offers good temporal resolution while fMRI offers good spatial resolution. The fusion methods reported so far ignore the underlying multi-way nature of the data in at least one of the modalities and/or rely on very strong assumptions concerning the relation among the respective data sets. In this paper, these two points are addressed by adopting tensor models for both modalities and by following a soft coupling approach to implement the fused analysis. To cope with the subject variability in EEG, the PARAFAC2 model is adopted. The results obtained are compared against those of Parallel ICA and hard coupling alternatives in both simulated and real data. Our results confirm the superiority of tensorial methods over methods based on ICA. In scenarios that do not meet the assumptions underlying hard coupling, the advantage of soft coupled decompositions is clearly demonstrated.) <|cite_end|>, multi-subject EEG and fMRI data have been analyzed using coupled matrix-tensor factorization (CMTF), wherein the `subjects' factor is shared between the EEG trilinear tensor decomposition and the fMRI matrix decomposition. In <|cite_start|> (Reference: Fusion of electroencephalography and functional magnetic resonance imaging to explore epileptic network activity: Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are two complementary modalities capturing a mixture of various underlying neural sources. The fusion of these modalities promises the best of both worlds, i.e. a better resolution in time and space, respectively. Assuming that EEG and fMRI observations are generated by the same mixing system in both modalities, their fusion can be achieved by joint blind source separation (BSS). We solve the joint BSS problem using different variants of joint independent component analysis (jointICA) and coupled matrix-tensor factorization (CMTF). We demonstrate that EEG-fMRI fusion provides a detailed spatio-temporal characterization of an EEG-fMRI dataset recorded in epilepsy patients, leading to new insights in epileptic network behaviour.) <|cite_end|>, the resulting factor signatures revealed onset and propagation zones of an interictal epileptic network that was common over patients, as well as the modulation of the default-mode network (DMN) activity.
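A schematic of such a coupled factorization, with the subject mode shared between the EEG tensor and the fMRI matrix, is given below; all dimensions and factor matrices are synthetic placeholders chosen for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_time, n_chan, n_vox, R = 12, 200, 21, 500, 3

A = rng.random(size=(n_subj, R))   # shared subject-mode factor
B = rng.random(size=(n_time, R))   # EEG temporal signatures
C = rng.random(size=(n_chan, R))   # EEG spatial signatures
D = rng.random(size=(n_vox, R))    # fMRI spatial signatures

EEG  = np.einsum('sr,tr,cr->stc', A, B, C)  # CP model of the EEG tensor
FMRI = A @ D.T                              # low-rank model of the fMRI matrix

def cmtf_loss(EEG_data, FMRI_data, A, B, C, D, mu=1.0):
    """Both data sets are explained by models sharing the factor A."""
    res_t = EEG_data - np.einsum('sr,tr,cr->stc', A, B, C)
    res_m = FMRI_data - A @ D.T
    return np.sum(res_t ** 2) + mu * np.sum(res_m ** 2)
\end{verbatim}
Minimizing such a coupled loss over all factors is what ties the modalities together: every component obtains an EEG characterization (B, C) and an fMRI characterization (D) through the common subject signature in A.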
Single-subject data can also be decomposed into distinct components,
using a shared temporal factor for EEG and fMRI. This requires the use of a model of the neurovascular coupling, to ensure temporal alignment of EEG and BOLD dynamics. In <|cite_start|> (Reference: Concurrent EEG/fMRI analysis by multiway Partial Least Squares: ) <|cite_end|>, a fixed canonical HRF was used, followed by multiway partial least squares to extract components with spatial, temporal, and spectral signatures. In previous work, we proposed an extension to this technique, where a subject-specific HRF is co-estimated from the available data, along with the components <|cite_start|> (Reference: Flexible Fusion of Electroencephalography and Functional Magnetic Resonance Imaging: Revealing Neural-Hemodynamic Coupling Through Structured Matrix-Tensor Factorization: Simultaneous recording of electroencephalographic (EEG) signals and functional magnetic resonance images (fMRI) has gained wide interest in brain research, thanks to the highly complementary spatiotemporal nature of both modalities. We propose a novel technique to extract sources of neural activity from the multimodal measurements, which relies on a structured form of coupled matrix-tensor factorization (CMTF). In a data-symmetric fashion, we characterize these underlying sources in the spatial, temporal and spectral domain, and estimate how the observations in EEG and fMRI are related through neurovascular coupling. That is, we explicitly account for the intrinsically variable nature of this coupling, allowing more accurate localization of the neural activity in time and space. We illustrate the effectiveness of this approach, which is shown to be robust to noise, by means of a simulation study. Hence, this provides a conceptually simple, yet effective alternative to other data-driven analysis methods in event-related or resting-state EEG-fMRI studies.) <|cite_end|>.
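The neurovascular-coupling step mentioned above can be illustrated by convolving an EEG-derived time course with a canonical double-gamma HRF, so that it becomes temporally comparable to the BOLD signal; the predictor values below are random placeholders.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def canonical_hrf(tr=1.0, duration=32.0):
    """Double-gamma canonical HRF sampled at the repetition time."""
    t = np.arange(0, duration, tr)
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

rng = np.random.default_rng(0)
eeg_power = rng.random(600)     # EEG-derived predictor, one value per volume

hrf = canonical_hrf(tr=2.0)
bold_predictor = np.convolve(eeg_power, hrf)[: len(eeg_power)]
\end{verbatim}
The convolved predictor, rather than the raw EEG power, is what can sensibly share a temporal factor with the fMRI data.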
In this paper, we extend this latter technique in order to account not only for subject-wise variation of the HRF, but also capture variations over brain regions. This results in a highly structured CMTF (sCMTF) of the interictal multimodal data, in which HRF basis functions and spatial weighting coefficients are estimated along with spatial, spectral and temporal signatures of components. By preprocessing the EEG using the data-driven filters from <|cite_start|> (Reference: Semi-automated EEG Enhancement Improves Localization of Ictal Onset Zone With EEG-Correlated fMRI: Objective: To improve the accuracy of detecting the ictal onset zone, we propose to enhance the epilepsy-related activity present in the EEG signals, before mapping their BOLD correlates through EEG-correlated fMRI analysis. Methods: Based solely on a segmentation of interictal epileptic discharges (IEDs) on the EEG, we train multi-channel Wiener filters (MWF) which enhance IED-like waveforms, and suppress background activity and noisy influences. Subsequently, we use EEG-correlated fMRI to find the brain regions in which the BOLD signal fluctuation corresponds to the filtered signals' time-varying power (after convolving with the hemodynamic response function), and validate the identified regions by quantitatively comparing them to ground-truth maps of the (resected or hypothesized) ictal onset zone. We validate the performance of this novel predictor vs. that of commonly used unitary or power-weighted predictors and a recently introduced connectivity-based metric, on a cohort of 12 patients with refractory epilepsy. Results: The novel predictor, derived from the filtered EEG signals, allowed the detection of the ictal onset zone in a larger percentage of epileptic patients (92% vs. at most 83% for the other predictors), and with higher statistical significance, compared to existing predictors. At the same time, the new method maintains maximal specificity by not producing false positive activations in healthy controls. Significance: The findings of this study advocate for the use of the MWF to maximize the signal-to-noise ratio of IED-like events in the interictal EEG, and subsequently use time-varying power as a sensitive predictor of the BOLD signal, to localize the ictal onset zone.) <|cite_end|>, we aim to maximize the sensitivity in mapping the interictal discharges. We analyze whether the estimated spatial modulation of the HRF is a viable biomarker when localizing the ictal onset zone, besides the BOLD spatial signatures themselves. <|paper_end|> | [
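Conceptually, letting the HRF vary over space amounts to writing each region's HRF as a weighted combination of a small set of basis waveforms, with the weights estimated per region. The sketch below uses the canonical HRF and its temporal derivative as an assumed basis; this choice, and the random weights, are purely illustrative and do not reproduce the exact sCMTF parameterization.
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

t = np.arange(0, 32, 2.0)
b0 = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0   # canonical HRF
b1 = np.gradient(b0)                            # its temporal derivative
basis = np.stack([b0, b1])                      # (K, L) HRF basis

rng = np.random.default_rng(0)
n_regions = 8
W = rng.normal(size=(n_regions, basis.shape[0]))  # per-region basis weights

s = rng.random(300)                 # shared (EEG-derived) temporal signature

hrf_per_region = W @ basis          # one HRF waveform per region
bold = np.stack([np.convolve(s, h)[: len(s)] for h in hrf_per_region])
\end{verbatim}
The spatial pattern of the weights W is then itself informative, which is why the estimated spatial modulation of the HRF is examined as a potential localizing biomarker in addition to the BOLD spatial signatures.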
"<|reference_start|> Tensor decompositions and data fusion in epileptic electroencephalography and functional magnetic resonance imaging data: Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) record a mixture of ongoing neural processes, physiological and nonphysiological noise. The pattern of interest, such as epileptic activity, is often hidden within this noisy mixture. Therefore, blind source separation (BSS) techniques, which can retrieve the activity pattern of each underlying source, are very useful. Tensor decomposition techniques are very well suited to solve the BSS problem, as they provide a unique solution under mild constraints. Uniqueness is crucial for an unambiguous interpretation of the components, matching them to true neural processes and characterizing them using the component signatures. Moreover, tensors provide a natural representation of the inherently multidimensional EEG and fMRI, and preserve the structural information defined by the interdependencies among the various modes such as channels, time, patients, etc. Despite the well‐developed theoretical framework, tensor‐based analysis of real, large‐scale clinical datasets is still scarce. Indeed, the application of tensor methods is not straightforward. Finding an appropriate tensor representation, suitable tensor model, and interpretation are application dependent choices, which require expertise both in neuroscience and in multilinear algebra. The aim of this paper is to provide a general guideline for these choices and illustrate them through successful applications in epilepsy. WIREs Data Mining Knowl Discov 2017, 7:e1197. doi: 10.1002/widm.1197 <|reference_end|>",
"<|reference_start|> Unraveling Diagnostic Biomarkers of Schizophrenia through Structure-Revealing Fusion of Multi-Modal Neuroimaging Data: Fusing complementary information from different modalities can lead to the discovery of more accurate diagnostic biomarkers for psychiatric disorders. However, biomarker discovery through data fusion is challenging since it requires extracting interpretable and reproducible patterns from data sets, consisting of shared/unshared patterns and of different orders. For example, multi-channel electroencephalography (EEG) signals from multiple subjects can be represented as a third-order tensor with modes: subject, time, and channel, while functional magnetic resonance imaging (fMRI) data may be in the form of subject by voxel matrices. Traditional data fusion methods rearrange higher-order tensors, such as EEG, as matrices to use matrix factorization-based approaches. In contrast, fusion methods based on coupled matrix and tensor factorizations (CMTF) exploit the potential multi-way structure of higher-order tensors. The CMTF approach has been shown to capture underlying patterns more accurately without imposing strong constraints on the latent neural patterns, i.e., biomarkers. In this paper, EEG, fMRI and structural MRI (sMRI) data collected during an auditory oddball task (AOD) from a group of subjects consisting of patients with schizophrenia and healthy controls, are arranged as matrices and higher-order tensors coupled along the subject mode, and jointly analyzed using structure-revealing CMTF methods (also known as advanced CMTF (ACMTF)) focusing on unique identification of underlying patterns in the presence of shared/unshared patterns. We demonstrate that joint analysis of the EEG tensor and fMRI matrix using ACMTF reveals significant and biologically meaningful components in terms of differentiating between patients with schizophrenia and healthy controls while also providing spatial patterns with high resolution and improving the clustering performance compared to the analysis of only the EEG tensor. We also show that these patterns are reproducible, and study reproducibility for different model parameters. In comparison to the joint independent component analysis (jICA) data fusion approach, ACMTF provides easier interpretation of EEG data by revealing a single summary map of the topography for each component. Furthermore, fusion of sMRI data with EEG and fMRI through an ACMTF model provides structural patterns; however, we also show that when fusing data sets from multiple modalities, hence of very different nature, preprocessing plays a crucial role. <|reference_end|>",
"<|reference_start|> Fusion of electroencephalography and functional magnetic resonance imaging to explore epileptic network activity: Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) are two complementary modalities capturing a mixture of various underlying neural sources. The fusion of these modalities promises the best of both worlds, i.e. a better resolution in time and space, respectively. Assuming that EEG and fMRI observations are generated by the same mixing system in both modalities, their fusion can be achieved by joint blind source separation (BSS). We solve the joint BSS problem using different variants of joint independent component analysis (jointICA) and coupled matrix-tensor factorization (CMTF). We demonstrate that EEG-fMRI fusion provides a detailed spatio-temporal characterization of an EEG-fMRI dataset recorded in epilepsy patients, leading to new insights in epileptic network behaviour. <|reference_end|>",
"<|reference_start|> Concurrent EEG/fMRI analysis by multiway Partial Least Squares: <|reference_end|>"
] | [
15,
17,
18,
21
] | {"<|cite_1|>": "ss-1769993", "<|multi_cite_2_1|>": "ss-1860239", "<|multi_cite_2_2|>": "ss-1409628", "<|multi_cite_2_3|>": "ss-1409628", "<|multi_cite_2_4|>": "ss-1943481", "<|multi_cite_2_5|>": "ss-1409628", "<|multi_cite_2_6|>": "ss-1943482", "<|multi_cite_2_7|>": "ss-1943483", "<|multi_cite_3_1|>": "ss-1943484", "<|multi_cite_3_2|>": "ss-1860239", "<|multi_cite_3_3|>": "ss-1409628", "<|multi_cite_3_4|>": "ss-1409628", "<|multi_cite_3_5|>": "ss-1943485", "<|cite_4|>": "ss-1943486", "<|cite_5|>": "ss-1267648", "<|cite_6|>": "ss-1025023", "<|multi_cite_7_1|>": "ss-1267648", "<|multi_cite_7_2|>": "ss-696845", "<|multi_cite_7_3|>": "ss-1190394", "<|multi_cite_8_2|>": "ss-1943487", "<|multi_cite_8_3|>": "ss-1409628", "<|multi_cite_8_4|>": "ss-1943488", "<|cite_9|>": "ss-1943488", "<|multi_cite_10_1|>": "ss-1860239", "<|multi_cite_10_2|>": "ss-1943486", "<|multi_cite_11_1|>": "ss-1943489", "<|multi_cite_11_2|>": "ss-1943490", "<|cite_12|>": "ss-1409628", "<|cite_13|>": "ss-1567487", "<|cite_14|>": "ss-1940107", "<|cite_15|>": "ss-1940108", "<|cite_16|>": "ss-1943491", "<|cite_17|>": "ss-1943492", "<|multi_cite_18_1|>": "ss-1409628", "<|multi_cite_18_2|>": "ss-1943493", "<|multi_cite_18_3|>": "ss-1943494", "<|multi_cite_18_4|>": "ss-1943495", "<|multi_cite_18_5|>": "ss-1943496", "<|multi_cite_19_1|>": "ss-1190394", "<|multi_cite_19_2|>": "ss-1943497", "<|multi_cite_19_3|>": "ss-1943498", "<|multi_cite_19_4|>": "ss-696845", "<|cite_20|>": "ss-1567487", "<|multi_cite_21_1|>": "ss-1943499", "<|multi_cite_21_2|>": "ss-1940107", "<|cite_22|>": "ss-1943500", "<|multi_cite_23_1|>": "ss-1943497", "<|multi_cite_23_2|>": "ss-696853", "<|cite_24|>": "ss-1943501", "<|multi_cite_25_1|>": "ss-1943502", "<|multi_cite_25_2|>": "ss-1943503", "<|cite_26|>": "ss-1943503", "<|cite_27|>": "ss-1409628", "<|multi_cite_28_1|>": "ss-1943493", "<|multi_cite_28_2|>": "ss-1943495", "<|multi_cite_28_3|>": "ss-1943504", "<|multi_cite_28_4|>": "ss-1943505", "<|multi_cite_28_5|>": "ss-1943494", "<|multi_cite_28_6|>": "ss-1943506", "<|multi_cite_28_7|>": "ss-1409628", "<|cite_29|>": "ss-1943507", "<|cite_30|>": "ss-1943508", "<|cite_31|>": "ss-1943494", "<|multi_cite_32_1|>": "ss-1190394", "<|multi_cite_32_2|>": "ss-1940108", "<|multi_cite_32_3|>": "ss-696845", "<|multi_cite_33_1|>": "ss-2327373", "<|multi_cite_33_2|>": "ss-1943509", "<|multi_cite_34_1|>": "ss-1943510", "<|multi_cite_34_2|>": "ss-1238004", "<|multi_cite_34_3|>": "ss-696856", "<|multi_cite_35_1|>": "arxiv-101588", "<|multi_cite_35_2|>": "ss-1238004", "<|multi_cite_35_3|>": "ss-1356700", "<|multi_cite_35_4|>": "ss-696861", "<|multi_cite_36_1|>": "ss-853984", "<|multi_cite_36_2|>": "ss-853979", "<|multi_cite_36_3|>": "ss-1943509", "<|multi_cite_37_1|>": "ss-696861", "<|multi_cite_37_2|>": "ss-1943511", "<|cite_38|>": "ss-853985", "<|cite_39|>": "ss-1127517", "<|multi_cite_40_1|>": "ss-1943512", "<|multi_cite_40_2|>": "ss-1358860", "<|multi_cite_40_3|>": "ss-1943513", "<|multi_cite_40_4|>": "ss-1685206", "<|cite_41|>": "ss-1943513", "<|cite_42|>": "ss-853983", "<|cite_43|>": "ss-696853", "<|cite_44|>": "ss-1943488"} |
1410.0600 | <|paper_start|> Title: Cell Stores
Abstract: Cell Stores: Cell stores provide a relational-like, tabular level of abstraction to business users while leveraging recent database technologies, such as key-value stores and document stores. This allows to scale up and out the efficient storage and retrieval of highly dimensional data. Cells are the primary citizens and exist in different forms, which can be explained with an analogy to the state of matter: as a gas for efficient storage, as a solid for efficient retrieval, and as a liquid for efficient interaction with the business users. Cell stores were abstracted from, and are compatible with the XBRL standard for importing and exporting data. The first cell store repository contains roughly 200GB of SEC filings data, and proves that retrieving data cubes can be performed in real time (the threshold acceptable by a human user being at most a few seconds).
Introduction
In 1970, Codd <|cite_start|> (Reference: {A Relational Model of Data for Large Shared Data Banks: Future users of large data banks must be protected from having to know how the data is organized in the machine (the internal representation). A prompting service which supplies such information is not a satisfactory solution. Activities of users at terminals and most application programs should remain unaffected when the internal representation of data is changed and even when some aspects of the external representation are changed. Changes in data representation will often be needed as a result of changes in query, update, and report traffic and natural growth in the types of stored information.
Existing noninferential, formatted data systems provide users with tree-structured files or slightly more general network models of the data. In Section 1, inadequacies of these models are discussed. A model based on n-ary relations, a normal form for data base relations, and the concept of a universal data sublanguage are introduced. In Section 2, certain operations on relations (other than logical inference) are discussed and applied to the problems of redundancy and consistency in the user's model.) <|cite_end|> introduced the relational model as an alternative to the graph and network models (such as file systems) in order to provide a more suitable interface to users, and to protect them from internal representations (``data independence'').
The relational model's first implementation was made public in 1976 by IBM <|cite_start|> (Reference: {System R: Relational Approach to Database Management: System R is a database management system which provides a high level relational data interface. The systems provides a high level of data independence by isolating the end user as much as possible from underlying storage structures. The system permits definition of a variety of relational views on common underlying data. Data control features are provided, including authorization, integrity assertions, triggered transactions, a logging and recovery subsystem, and facilities for maintaining data consistency in a shared-update environment.
This paper contains a description of the overall architecture and design of the system. At the present time the system is being implemented and the design evaluated. We emphasize that System R is a vehicle for research in database architecture, and is not planned as a product.) <|cite_end|>.
In the last four decades, the relational model has been enjoying undisputed popularity and has been widely used in enterprise environments. This is probably because it is both very simple to understand and universal. Furthermore, it is accessible to business users without IT knowledge, to whom tabular structures are very natural --- as demonstrated by the strong usage of spreadsheet software <|cite_start|> (Reference: Budgeting Models and System Simulation: A Dynamic Approach: In this paper the main idea is to create a flexible Budgeting Model which works in a dynamic framework capable of providing information about the financial position, income, assets and liabilities of an enterprise. This new tool should be considered the starting point for a new class of models able to adapt to the new informative requirements that come from different economic subjects. In order to create a similar model one has to analyze the procedures of double entry bookkeeping and find a mathematical formalization of them. Then, using the typical relationships presented in the balance sheet, it will be possible to build an initial quantitative model which uses accounting data to represent the dynamics of the company and to create a simulation associated with those dynamics. In succession we are showing how our accounting recordings formalization, based on the concept of difference equations, can be used to model the entire dynamic of the Budgeting Model and more in general the dynamic of a Balance Sheet. Afterwards a closed form solution will be proposed. This solution gives us information about the financial position, income, assets and liabilities of an enterprise after n-periods and it could be used both to simulate firm budget and for other financial purposes. However, this research wants also to introduce a new series of quantitative instruments to integrate the set of information at the disposal of companies, obtained, to begin with, from its accounting records.) <|cite_end|> (such as Microsoft Excel, Apple Numbers, Lotus 1-2-3, OpenOffice Calc) as well as user-friendly front-ends (such as Microsoft Access).
However, in the 2000s, the exponential explosion in the quantity of data to be handled increasingly exposed the limitations of this model. Several companies, such as Google, Facebook, and Twitter, needed to scale up and out beyond the capabilities of any RDBMS, both because of the \emph{quantity} of data (rows) and because of the \emph{high dimensionality} of this data (columns). Each of them built its own ad-hoc data management system (Bigtable <|cite_start|> (Reference: Bigtable: A Distributed Storage System For Structured Data: Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable.) <|cite_end|>, Cassandra <|cite_start|> (Reference: Cassandra: A Decentralized Structured Storage System: Cassandra is a distributed storage system for managing very large amounts of structured data spread out across many commodity servers, while providing highly available service with no single point of failure. Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different data centers). At this scale, small and large components fail continuously. The way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. While in many ways Cassandra resembles a database and shares many design and implementation strategies therewith, Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format. Cassandra system was designed to run on cheap commodity hardware and handle high write throughput while not sacrificing read efficiency.) <|cite_end|>, ...). These technologies often share the same design ideas (scaling out through clustering and replication, handling high dimensionality through data heterogeneity and tree structures), which led to the popular common denomination of NoSQL, an umbrella term for:
\begin{description}
\item[Key-value stores,] which store large collections of key-value pairs (a small illustration follows this list). Example: DynamoDB.
\item[Document stores,] which are document-oriented, typically supporting XML or JSON <|cite_start|> (Reference: JSON - JavaScript Object Notation: ) <|cite_end|>. Example: MongoDB.
\item[Column stores,] which keep the table abstraction while allowing some sparseness. Example: Cassandra.
\item[Graph databases,] which work at the lower level of individual triples. Example: Neo4j.
\end{description}
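To make the distinction more concrete, the following sketch shows the same record expressed in the style of a key-value store and of a document store. It is illustrative only: the field names and values are invented for this example and do not come from any specific product's API.
\begin{verbatim}
# Illustrative only: the same record in the style of a key-value store
# (an opaque value behind a composite key) and of a document store
# (a self-describing JSON document). Names and values are made up.
import json

# Key-value style: the store only sees an opaque string value.
kv_pair = ("filing:0000123456:2014-Q4:Revenues", "1000000")

# Document style: nested, self-describing structure the store can index.
document = {
    "entity": "0000123456",
    "period": "2014-Q4",
    "concept": "Revenues",
    "value": 1000000,
    "unit": "USD",
}
print(json.dumps(document, indent=2))
\end{verbatim}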
NoSQL solves the scale-up issue, but at a two-fold cost:
\begin{description}
\item[For developers,] the level of abstraction provided by NoSQL stores is much lower than that of the relational model. These data stores often provide only limited querying capabilities such as point or range queries, insert, delete, and update (CRUD). Higher-level operations such as joins must be implemented in a host language, on a case-by-case basis (the sketch after this list illustrates this).
\item[For business users,] these data models are much less natural than tabular data. Reading and editing data formats such as XML or JSON requires at least basic IT knowledge. Furthermore, business users should not have to deal with indexes at all.
\end{description}
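As an illustration of the developer-side cost, the following toy sketch shows the kind of join that must be hand-coded in the host language when the store itself only offers point and range lookups. The collection and field names are invented for the example; this is not the API of any particular store.
\begin{verbatim}
# Illustrative only: a hand-rolled hash join in the host language,
# written case by case for each query, because the store has no join
# operator. The two "collections" stand in for fetched documents.
orders = [
    {"order_id": 1, "customer_id": "c1", "amount": 250},
    {"order_id": 2, "customer_id": "c2", "amount": 90},
]
customers = [
    {"customer_id": "c1", "name": "Alice"},
    {"customer_id": "c2", "name": "Bob"},
]

# Build a lookup table on the join key, then probe it.
by_id = {c["customer_id"]: c for c in customers}
joined = [
    {**o, "name": by_id[o["customer_id"]]["name"]}
    for o in orders
    if o["customer_id"] in by_id
]
print(joined)
\end{verbatim}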
This is a major step backward from Codd's intentions in the 1970s, as the very representations he wanted to protect users from (tree-like data structures, storage, ...) are pushed back onto the user.
Reluctance can be observed amongst non-technical users, and this might explain why the ``big three'' (Oracle, Microsoft, IBM) are heavily pushing the use of the SQL language <|cite_start|> (Reference: Sequel: A structured english query language: In this paper we present the data manipulation facility for a structured English query language (SEQUEL) which can be used for accessing data in an integrated relational data base. Without resorting to the concepts of bound variables and quantifiers SEQUEL identifies a set of simple operations on tabular structures, which can be shown to be of equivalent power to the first order predicate calculus. A SEQUEL user is presented with a consistent set of keyword English templates which reflect how people use tables to obtain information. Moreover, the SEQUEL user is able to compose these basic templates in a structured manner in order to form more complex queries. SEQUEL is intended as a data base sublanguage for both the professional programmer and the more infrequent data base user.) <|cite_end|> on top of these data stores.
This paper introduces the cell store data paradigm, whose goal is to (i) leverage the technological advancements made in the last decade, while (ii) bringing back to business users control and understanding over their data. The cell store paradigm was vastly inspired by and abstracted from the XBRL standard <|cite_start|> (Reference: XBRL: У статті досліджено сучасний стан впровадження IT у систему бухгалтерського обліку, а са-ме: у процес формування та подання фінансової звітності. Цифрова фінансова звітність змінить практику ведення бухгалтерського обліку найближчими роками, а зміни полягатимуть у технологіях представлення да-них. У 2021 році усі підприємства в Україні, що подають фінансову звітність за МСФЗ, повинні подавати фінансову звітність у форматі таксономії XBRL. У статті розкрито технологію подання фінансової звіт-ності у форматі іXBRL, яка є сучасним та зрозумілим стандартом обміну фінансовою інформацією між різ-ними зацікавленими користувачами з будь-якій країні світу. Розглянуто сучасний стан, проблеми та перспек-тиви впровадження нового цифрового стандарту подання фінансової звітності – iXBRL в Україні. Про-аналізовано сучасне програмне забезпечення щодо формування та подання фінансової звітності у форматі iXBRL на портал FRS. Визначено основні переваги та недоліки використання різних способів формування та подання фінансової звітності у форматі iXBRL. Надано рекомендації щодо процесу впровадження системи подання фінансової звітності у форматі iXBRL. Зроблено висновок, що на сучасному етапі розвитку інфор-маційних технологій в Україні кожна компанія може обирати свій шлях переходу на XBRL формат подання фінансової звітності. Проте, перед тим, як прийняти рішення щодо переходу на звітування в XBRL, необхідно: провести аналіз та перевірку власних ІТ-систем на предмет можливості формувати фінансову звітність у форматі XBRL; провести мапінг або зіставлення звітної інформації з власної ІТ-системи та розширеної так-сономії UA XBRL МСФЗ; провести валідацію XBRL, використавши уже існуюче програмне забезпечення (до прикладу, Arelle) чи на власному внутрішньому програмному забезпеченні компанії.) <|cite_end|>, which defines a serialization format for exchanging facts. Historically, cell stores were precisely designed in order to efficiently store and retrieve XBRL data. With time, this paradigm was decoupled from XBRL in such a way that it could also accommodate for data beyond business reporting. In particular, relational data can also be dropped into a cell store.
Cell stores sit at a sweet spot between key-value stores on the one hand, in that they scale seamlessly and gracefully with both the quantity and the dimensionality of the data, and the relational model on the other hand, in that business users access the data through tabular views in familiar, spreadsheet-like interfaces.
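The following toy sketch illustrates this idea under our own simplifying assumptions, which are not a specification taken from the paper: a cell is taken to be a fact value tagged with dimension-value pairs, and a spreadsheet-like view is obtained by pivoting cells that share dimensions. All names and numbers are invented.
\begin{verbatim}
# A toy illustration of the cell abstraction (our assumption, not the
# paper's data model): each cell is a fact value plus dimension-value
# pairs; a tabular view is a pivot over two dimensions.
cells = [
    {"Concept": "Assets", "Entity": "ACME", "Period": "2013", "Value": 100},
    {"Concept": "Assets", "Entity": "ACME", "Period": "2014", "Value": 120},
    {"Concept": "Equity", "Entity": "ACME", "Period": "2013", "Value": 40},
    {"Concept": "Equity", "Entity": "ACME", "Period": "2014", "Value": 55},
]

def pivot(cells, row_dim, col_dim):
    """Arrange cells into a row/column table keyed on two dimensions."""
    table = {}
    for cell in cells:
        table.setdefault(cell[row_dim], {})[cell[col_dim]] = cell["Value"]
    return table

# A spreadsheet-like view: concepts as rows, periods as columns.
print(pivot(cells, "Concept", "Period"))
# {'Assets': {'2013': 100, '2014': 120}, 'Equity': {'2013': 40, '2014': 55}}
\end{verbatim}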
Section \ref{section-state-of-the-art} gives an overview of state of the art technologies for storing large quantities of highly dimensional data and their shortcomings. Section \ref{section-why} motivates the need for the cell stores paradigm. Section \ref{section-data-model} introduces the data model behind cell stores. Section \ref{section-relational-mapping} shows how a relational database can be stored naturally in a cell store. Section \ref{section-xbrl-standard} points out that there is a standard format, XBRL, for exchanging data between cell stores as well as other databases. Section \ref{section-implementation} gives implementation-level details. Section \ref{section-performance} explores performance. <|paper_end|> | [
"<|reference_start|> {System R: Relational Approach to Database Management: System R is a database management system which provides a high level relational data interface. The systems provides a high level of data independence by isolating the end user as much as possible from underlying storage structures. The system permits definition of a variety of relational views on common underlying data. Data control features are provided, including authorization, integrity assertions, triggered transactions, a logging and recovery subsystem, and facilities for maintaining data consistency in a shared-update environment.\nThis paper contains a description of the overall architecture and design of the system. At the present time the system is being implemented and the design evaluated. We emphasize that System R is a vehicle for research in database architecture, and is not planned as a product. <|reference_end|>",
"<|reference_start|> Bigtable: A Distributed Storage System For Structured Data: Bigtable is a distributed storage system for managing structured data that is designed to scale to a very large size: petabytes of data across thousands of commodity servers. Many projects at Google store data in Bigtable, including web indexing, Google Earth, and Google Finance. These applications place very different demands on Bigtable, both in terms of data size (from URLs to web pages to satellite imagery) and latency requirements (from backend bulk processing to real-time data serving). Despite these varied demands, Bigtable has successfully provided a flexible, high-performance solution for all of these Google products. In this paper we describe the simple data model provided by Bigtable, which gives clients dynamic control over data layout and format, and we describe the design and implementation of Bigtable. <|reference_end|>",
"<|reference_start|> Cassandra: A Decentralized Structured Storage System: Cassandra is a distributed storage system for managing very large amounts of structured data spread out across many commodity servers, while providing highly available service with no single point of failure. Cassandra aims to run on top of an infrastructure of hundreds of nodes (possibly spread across different data centers). At this scale, small and large components fail continuously. The way Cassandra manages the persistent state in the face of these failures drives the reliability and scalability of the software systems relying on this service. While in many ways Cassandra resembles a database and shares many design and implementation strategies therewith, Cassandra does not support a full relational data model; instead, it provides clients with a simple data model that supports dynamic control over data layout and format. Cassandra system was designed to run on cheap commodity hardware and handle high write throughput while not sacrificing read efficiency. <|reference_end|>",
"<|reference_start|> XBRL: У статті досліджено сучасний стан впровадження IT у систему бухгалтерського обліку, а са-ме: у процес формування та подання фінансової звітності. Цифрова фінансова звітність змінить практику ведення бухгалтерського обліку найближчими роками, а зміни полягатимуть у технологіях представлення да-них. У 2021 році усі підприємства в Україні, що подають фінансову звітність за МСФЗ, повинні подавати фінансову звітність у форматі таксономії XBRL. У статті розкрито технологію подання фінансової звіт-ності у форматі іXBRL, яка є сучасним та зрозумілим стандартом обміну фінансовою інформацією між різ-ними зацікавленими користувачами з будь-якій країні світу. Розглянуто сучасний стан, проблеми та перспек-тиви впровадження нового цифрового стандарту подання фінансової звітності – iXBRL в Україні. Про-аналізовано сучасне програмне забезпечення щодо формування та подання фінансової звітності у форматі iXBRL на портал FRS. Визначено основні переваги та недоліки використання різних способів формування та подання фінансової звітності у форматі iXBRL. Надано рекомендації щодо процесу впровадження системи подання фінансової звітності у форматі iXBRL. Зроблено висновок, що на сучасному етапі розвитку інфор-маційних технологій в Україні кожна компанія може обирати свій шлях переходу на XBRL формат подання фінансової звітності. Проте, перед тим, як прийняти рішення щодо переходу на звітування в XBRL, необхідно: провести аналіз та перевірку власних ІТ-систем на предмет можливості формувати фінансову звітність у форматі XBRL; провести мапінг або зіставлення звітної інформації з власної ІТ-системи та розширеної так-сономії UA XBRL МСФЗ; провести валідацію XBRL, використавши уже існуюче програмне забезпечення (до прикладу, Arelle) чи на власному внутрішньому програмному забезпеченні компанії. <|reference_end|>"
] | [
1,
3,
4,
7
] | {"<|cite_1|>": "ss-968335", "<|cite_2|>": "ss-720824", "<|cite_4|>": "ss-1789366", "<|cite_5|>": "ss-1073058", "<|cite_6|>": "ss-1073057", "<|cite_8|>": "ss-2537231", "<|cite_9|>": "ss-1439883", "<|cite_10|>": "ss-1789367"} |
1905.08723 | <|paper_start|> Title: A comparison of evaluation methods in coevolution
Abstract: A comparison of evaluation methods in coevolution: In this research, we compare four different evaluation methods in coevolution on the Majority Function problem. The size of the problem is selected such that evaluation against all possible test cases is feasible. Two measures are used for the comparisons, i.e., the objective fitness derived from evaluating solutions against all test cases, and the objective fitness correlation (OFC), which is defined as the correlation coefficient between subjective and objective fitness. The results of our experiments suggest that a combination of average score and weighted informativeness may provide a more accurate evaluation in coevolution. In order to confirm this difference, a series of t-tests on the preference between each pair of the evaluation methods is performed. The resulting significance is affirmative, and the tests for two quality measures show similar preference on four evaluation methods. Experiments on Majority Function problems with larger sizes and Parity problems are in progress, and their results will be added in the final version.
Introduction
Coevolution offers an approach to adaptively select tests for the evaluation of learners <|cite_start|> (Reference: Co-evolving parasites improve simulated evolution as an optimization procedure: ) <|cite_end|> <|cite_start|> (Reference: New methods for competitive coevolution: We consider competitive coevolution, in which fitness is based on direct competition among individuals selected from two independently evolving populations of hosts and parasites. Competitive coevolution can lead to an arms race, in which the two populations reciprocally drive one another to increasing levels of performance and complexity. We use the games of Nim and 3-D Tic-Tac-Toe as test problems to explore three new techniques in competitive coevolution. Competitive fitness sharing changes the way fitness is measured; shared sampling provides a method for selecting a strong, diverse set of parasites; and the hall of fame encourages arms races by saving good individuals from prior generations. We provide several different motivations for these methods and mathematical insights into their use. Experimental comparisons are done, and a detailed analysis of these experiments is presented in terms of testing issues, diversity, extinction, arms race progress measurements, and drift.) <|cite_end|> <|cite_start|> (Reference: Coevolving the "ideal" trainer: Application to the discovery of cellular automata rules: Coevolution provides a framework to implement search heuristics that are more elaborate than those driving the exploration of the state space in canonical evolutionary systems. However, some drawbacks have also to be overcome in order to ensure continuous progress on the long term. This paper presents the concept of coevolutionary learning and introduces a search procedure which successfully addresses the underlying impediments in coevolutionary search. The application of this algorithm to the discovery of cellular automata rules for a classi cation task is described. This work resulted in a signi cant improvement over previously known best rules for this task.) <|cite_end|> <|cite_start|> (Reference: Coevolutionary computation: This article proposes a general framework for the use of coevolution to boost the performance of genetic search. It combines coevolution with yet another biologically inspired technique, called lifetime fitness evaluation (LTFE). Two unrelated problems—neural net learning and constraint satisfaction—are used to illustrate the approach. Both problems use predator-prey interactions to boost the search. In contrast with traditional single population genetic algorithms (GAs), two populations constantly interact and coevolve. However, the same algorithm can also be used with different types of coevolutionary interactions. As an example, the symbiotic coevolution of solutions and genetic representations is shown to provide an elegant solution to the problem of finding a suitable genetic representation. The approach presented here greatly profits from the partial and continuous nature of LTFE. Noise tolerance is one advantage. Even more important, LTFE is ideally suited to deal with coupled fitness landscapes typical for coevolution.) <|cite_end|> <|cite_start|> (Reference: Evolutionary consequences of coevolving targets: Most evolutionary optimization models incorporate a fitness evaluation that is based on a predefined static set of test cases or problems. In the natural evolutionary process, selection is of course not based on a static fitness evaluation. 
Organisms do not have to combat every existing disease during their lifespan; organisms of one species may live in different or changing environments; different species coevolve. This leads to the question of how information is integrated over many generations. This study focuses on the effects of different fitness evaluation schemes on the types of genotypes and phenotypes that evolve. The evolutionary target is a simple numerical function. The genetic representation is in the form of a program (i.e., a functional representation, as in genetic programming). Many different programs can code for the same numerical function. In other words, there is a many-to-one mapping between genotypes (the programs) and phenotypes. We compare fitness evaluation based on a large static set of problems and fitness evaluation based on small coevolving sets of problems. In the latter model very little information is presented to the evolving programs regarding the evolutionary target per evolutionary time step. In other words, the fitness evaluation is very sparse. Nevertheless the model produces correct solutions to the complete evolutionary target in about half of the simulations. The complete evaluation model, on the other hand, does not find correct solutions to the target in any of the simulations. More important, we find that sparse evaluated programs are better generalizable compared to the complete evaluated programs when they are evaluated on a much denser set of problems. In addition, the two evaluation schemes lead to programs that differ with respect to mutational stability; sparse evaluated programs are less stable than complete evaluated programs.) <|cite_end|> <|cite_start|> (Reference: Ideal evaluation from coevolution: In many problems of interest, performance can be evaluated using tests, such as examples in concept learning, test points in function approximation, and opponents in game-playing. Evaluation on all tests is often infeasible. Identification of an accurate evaluation or fitness function is a difficult problem in itself, and approximations are likely to introduce human biases into the search process. Coevolution evolves the set of tests used for evaluation, but has so far often led to inaccurate evaluation. We show that for any set of learners, a Complete Evaluation Set can be determined that provides ideal evaluation as specified by Evolutionary Multi-Objective Optimization. This provides a principled approach to evaluation in coevolution, and thereby brings automatic ideal evaluation within reach. The Complete Evaluation Set is of manageable size, and progress towards it can be accurately measured. Based on this observation, an algorithm named DELPHI is developed. The algorithm is tested on problems likely to permit progress on only a subset of the underlying objectives. Where all comparison methods result in overspecialization, the proposed method and a variant achieve sustained progress in all underlying objectives. These findings demonstrate that ideal evaluation may be approximated by practical algorithms, and that accurate evaluation for test-based problems is possible even when the underlying objectives of a problem are unknown.) <|cite_end|>. Using coevolution, the evaluation function is adapted as part of the evolutionary process. This approach can be useful if the quality of individuals can be assessed using some form of {\em tests}. 
For such {\em test-based problems}, the identification of an informative set of tests can reduce the amount of required computation, while potentially providing more useful information than any static selection of tests. Since an adaptive test set can render evaluation unstable, an important question is how coevolution can be set up to be sufficiently reliable.
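For readers unfamiliar with the setup, the following deliberately minimal sketch illustrates the two-population arrangement described above. It is not any particular algorithm from the literature, and the toy interaction function (a learner ``solves'' a test if its value is at least the test's value) is an assumption made purely for illustration.
\begin{verbatim}
# Minimal two-population coevolution sketch (illustrative only).
# Learners get a *subjective* fitness: average outcome against the
# current tests; tests are rewarded for being failed, so they adapt.
import random

random.seed(1)
POP = 20

def interact(learner, test):
    return 1.0 if learner >= test else 0.0

def mutate(x):
    return min(max(x + random.gauss(0, 0.05), 0.0), 1.0)

def next_gen(pop, fit):
    ranked = [x for _, x in sorted(zip(fit, pop), reverse=True)]
    parents = ranked[: POP // 2]
    return parents + [mutate(random.choice(parents))
                      for _ in range(POP - len(parents))]

learners = [random.random() for _ in range(POP)]
tests = [random.random() for _ in range(POP)]

for generation in range(50):
    l_fit = [sum(interact(l, t) for t in tests) / len(tests)
             for l in learners]
    t_fit = [sum(1.0 - interact(l, t) for l in learners) / len(learners)
             for t in tests]
    learners = next_gen(learners, l_fit)
    tests = next_gen(tests, t_fit)
\end{verbatim}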
A recent insight in coevolution research is that the design of a coevolutionary setup should begin with a consideration of the desired {\em solution concept} <|cite_start|> (Reference: Solution concepts in coevolutionary algorithms: Inspired by the principle of natural selection, coevolutionary algorithms are search methods in which processes of mutual adaptation occur amongst agents that interact strategically. The outcomes of interaction reveal a reward structure that guides evolution towards the discovery of increasingly adaptive behaviors. Thus, coevolutionary algorithms are often used to search for optimal agent behaviors in domains of strategic interaction.
Coevolutionary algorithms require little a priori knowledge about the domain. We assume the learning task necessitates the algorithm to (1) discover agent behaviors, (2) learn the domain's reward structure, and (3) approximate an optimal solution. Despite the many successes of coevolutionary optimization, the practitioner frequently observes a gap between the properties that actually confer agent adaptivity and those expected (or desired) to yield adaptivity, or optimality. This gap is manifested by a variety of well-known pathologies, such as cyclic dynamics, loss of fitness gradient, and evolutionary forgetting.
This dissertation examines the divergence between expectation and actuality in co-evolutionary algorithms—why selection pressures fail to conform to our beliefs about adaptiveness, or why our beliefs are evidently erroneous. When we confront the pathologies of coevolutionary algorithms as a collection, we find that they are essentially epiphenomena of a single fundamental problem, namely a lack of rigor in our solution concepts .
A solution concept is a formalism with which to describe and understand the incentive structures of agents that interact strategically. All coevolutionary algorithms implement some solution concept, whether by design or by accident, and optimize according to it. Failures to obtain the desiderata of “complexity” or “optimality” often indicate a dissonance between the implemented solution concept and that required by our envisaged goal.
We make the following contributions: (1) We show that solution concepts are the critical link between our expectations of coevolution and the outcomes actually delivered by algorithm operation, and are therefore crucial to explicating the divergence between the two, (2) We provide analytic results that show how solution concepts bring our expectations in line with algorithmic reality, and (3) We show how solution concepts empower us to construct algorithms that operate more in line with our goals.) <|cite_end|>. A solution concept specifies which elements of the search space qualify as solutions and which do not. Examples of solution concepts include: Maximum Expected Utility (maximizing the expected outcome against a randomly selected opponent, which for uniform selection is equivalent to maximizing the average outcome against all opponents), the Pareto-optimal set resulting from viewing each test as a separate objective, and Nash equilibria, in which no candidate solution or test can improve its payoff by unilaterally deviating, given the other candidate solutions and tests.
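As a concrete illustration of the first two of these solution concepts, the following sketch computes Maximum Expected Utility and the Pareto non-dominated set on a small, made-up outcome matrix. It is a toy example, not the experimental setup of this paper.
\begin{verbatim}
# G[s][t] is the score of candidate solution s against test t
# (values invented for illustration).
G = [
    [1, 1, 0],   # candidate 0
    [1, 0, 1],   # candidate 1
    [0, 1, 1],   # candidate 2
    [1, 0, 0],   # candidate 3
]

# Maximum Expected Utility: highest average outcome over all tests
# (expected outcome against a uniformly random test).
meu = max(range(len(G)), key=lambda s: sum(G[s]) / len(G[s]))

# Pareto-optimal set: each test is an objective; a candidate is
# dominated if another does at least as well on every test and
# strictly better on at least one.
def dominates(a, b):
    return (all(x >= y for x, y in zip(a, b))
            and any(x > y for x, y in zip(a, b)))

pareto = [s for s in range(len(G))
          if not any(dominates(G[o], G[s])
                     for o in range(len(G)) if o != s)]

print(meu)     # 0 (ties broken by index; candidates 0-2 all average 2/3)
print(pareto)  # [0, 1, 2]
\end{verbatim}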
For several of the main solution concepts used in coevolution, archive methods exist guaranteeing that when sufficiently diverse sets of new individuals are submitted to the archive, the archive will produce monotonically improving approximations of the solution concept. Recent examples of such archive methods are the Nash Memory <|cite_start|> (Reference: Solution concepts in coevolutionary algorithms: Inspired by the principle of natural selection, coevolutionary algorithms are search methods in which processes of mutual adaptation occur amongst agents that interact strategically. The outcomes of interaction reveal a reward structure that guides evolution towards the discovery of increasingly adaptive behaviors. Thus, coevolutionary algorithms are often used to search for optimal agent behaviors in domains of strategic interaction.
Coevolutionary algorithms require little a priori knowledge about the domain. We assume the learning task necessitates the algorithm to (1) discover agent behaviors, (2) learn the domain's reward structure, and (3) approximate an optimal solution. Despite the many successes of coevolutionary optimization, the practitioner frequently observes a gap between the properties that actually confer agent adaptivity and those expected (or desired) to yield adaptivity, or optimality. This gap is manifested by a variety of well-known pathologies, such as cyclic dynamics, loss of fitness gradient, and evolutionary forgetting.
This dissertation examines the divergence between expectation and actuality in co-evolutionary algorithms—why selection pressures fail to conform to our beliefs about adaptiveness, or why our beliefs are evidently erroneous. When we confront the pathologies of coevolutionary algorithms as a collection, we find that they are essentially epiphenomena of a single fundamental problem, namely a lack of rigor in our solution concepts .
A solution concept is a formalism with which to describe and understand the incentive structures of agents that interact strategically. All coevolutionary algorithms implement some solution concept, whether by design or by accident, and optimize according to it. Failures to obtain the desiderata of “complexity” or “optimality” often indicate a dissonance between the implemented solution concept and that required by our envisaged goal.
We make the following contributions: (1) We show that solution concepts are the critical link between our expectations of coevolution and the outcomes actually delivered by algorithm operation, and are therefore crucial to explicating the divergence between the two, (2) We provide analytic results that show how solution concepts bring our expectations in line with algorithmic reality, and (3) We show how solution concepts empower us to construct algorithms that operate more in line with our goals.) <|cite_end|> <|cite_start|> (Reference: A Game-Theoretic Memory Mechanism for Coevolution: ) <|cite_end|>, which guarantees monotonicity for the Nash equilibrium solution concept; the IPCA algorithm, which guarantees monotonicity for the Pareto-optimal equivalence set <|cite_start|> (Reference: A monotonic archive for pareto-coevolution: Coevolution has already produced promising results, but its dynamic evaluation can lead to a variety of problems that prevent most algorithms from progressing monotonically. An important open question therefore is how progress towards a chosen solution concept can be achieved. A general solution concept for coevolution is obtained by viewing opponents or tests as objectives. In this setup known as Pareto-coevolution, the desired solution is the Pareto-optimal set. We present an archive that guarantees monotonicity for this solution concept. The algorithm is called the Incremental Pareto-Coevolution Archive (IPCA), and is based on Evolutionary Multi-Objective Optimization (EMOO). By virtue of its monotonicity, IPCA avoids regress even when combined with a highly explorative generator. This capacity is demonstrated on a challenging test problem requiring both exploration and reliability. IPCA maintains a highly specific selection of tests, but the size of the test archive nonetheless grows unboundedly. We therefore furthermore investigate how archive sizes may be limited while still providing approximate reliability. The LAyered Pareto-Coevolution Archive (LAPCA) maintains a limited number of layers of candidate solutions and tests, and thereby permits a trade-off between archive size and reliability. The algorithm is compared in experiments, and found to be more efficient than IPCA. The work demonstrates how the approximation of a monotonic algorithm can lead to algorithms that are sufficiently reliable in practice while offering better efficiency.) <|cite_end|>; and the MaxSolve algorithm <|cite_start|> (Reference: The MaxSolve algorithm for coevolution: Coevolution can be used to adaptively choose the tests used for evaluating candidate solutions. A long-standing question is how this dynamic setup may be organized to yield reliable search methods. Reliability can only be considered in connection with a particular solution concept specifying what constitutes a solution. Recently, monotonic coevolution algorithms have been proposed for several solution concepts. Here, we introduce a new algorithm that guarantees monotonicity for the solution concept of maximizing the expected utility of a candidate solution. The method, called MaxSolve, is compared to the IPCA algorithm and found to perform more efficiently for a range of parameter values on an abstract test problem.) <|cite_end|>, which guarantees monotonicity for the Maximum Expected Utility solution concept.
While theoretical guarantees of monotonic progress are important, so far no bounds or guarantees are available regarding how quickly the approximation to the solution concept improves over time. Thus, approximating the solution concept to a desired degree of accuracy may take an infeasible amount of time. An important current practical question is therefore: how can coevolutionary algorithms be set up such that their dynamics lead to quick improvement over time? By using such efficient algorithms as generators of new individuals and coupling them to monotonic archives, the guarantee of monotonic progress is combined with efficiency, yielding a principled approach to designing robust and efficient coevolution algorithms.
Efficiency in coevolutionary algorithms depends on selection (see e.g. <|cite_start|> (Reference: A game-theoretic and dynamical-systems analysis of selection methods in coevolution: We use evolutionary game theory (EGT) to investigate the dynamics and equilibria of selection methods in coevolutionary algorithms. The canonical selection method used in EGT is equivalent to the standard "fitness-proportional" selection method used in evolutionary algorithms. All attractors of the EGT dynamic are Nash equilibria; we focus on simple symmetric variable-sum games that have polymorphic Nash-equilibrium attractors. Against the dynamics of proportional selection, we contrast the behaviors of truncation selection, $(\mu,\lambda)$, $(\mu+\lambda)$, linear ranking, Boltzmann, and tournament selection. Except for Boltzmann selection, each of the methods we test unconditionally fail to achieve polymorphic Nash equilibrium. Instead, we find point attractors that lack game-theoretic justification, cyclic dynamics, or chaos. Boltzmann selection converges onto polymorphic Nash equilibrium only when selection pressure is sufficiently low; otherwise, we obtain attracting limit-cycles or chaos. Coevolutionary algorithms are often used to search for solutions (e.g., Nash equilibria) of games of strategy; our results show that many selection methods are inappropriate for finding polymorphic Nash solutions to variable-sum games. Another application of coevolution is to model other systems; our results emphasize the degree to which the model's behavior is sensitive to implementation details regarding selection -- details that we might not otherwise believe to be critical.) <|cite_end|>) and on evaluation. In this research we focus on evaluation. Our aim is to compare the efficiency that can be achieved using different coevolutionary evaluation methods, reflected in the improvement over time of an objective quality measure. Since a main question is how sufficiently accurate evaluation may be achieved, the testing environment is chosen such that evaluating individuals on all tests is feasible; while this is not the case in practical applications of coevolution, it makes it possible to compare evaluation methods with the maximally informative situation in which information about all possible tests is available. This setup permits investigating two important questions:
\begin{enumerate}
\item Given all information that may be relevant to evaluation, how can this information be used optimally?
\item Compared to evaluation based on all relevant information, how do different coevolutionary evaluation methods perform?
\end{enumerate}
In this paper, we focus on the second question. Four different coevolutionary evaluation methods are compared to each other and to the baseline of testing against all tests. The test problem is a small variant of the Majority Function test problem <|cite_start|> (Reference: Revisiting the edge of chaos: evolving cellular automata to perform computations: Author(s): Mitchell, Melanie; Hraber, Peter; Crutchfield, James P | Abstract: We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near ``critical'' lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability.) <|cite_end|> <|cite_start|> (Reference: Evolving cellular automata to perform computations: mechanisms and impediments: ) <|cite_end|> <|cite_start|> (Reference: Coevolving the "ideal" trainer: Application to the discovery of cellular automata rules: Coevolution provides a framework to implement search heuristics that are more elaborate than those driving the exploration of the state space in canonical evolutionary systems. However, some drawbacks have also to be overcome in order to ensure continuous progress on the long term. This paper presents the concept of coevolutionary learning and introduces a search procedure which successfully addresses the underlying impediments in coevolutionary search. The application of this algorithm to the discovery of cellular automata rules for a classi cation task is described. This work resulted in a signi cant improvement over previously known best rules for this task.) <|cite_end|> chosen such that evaluation against all test cases (initial conditions) is feasible. A new tool named the Objective Fitness Correlation (OFC) <|cite_start|> (Reference: Objective fitness correlation: evaluating coevolutionary evaluation: This paper introduces the Objective Fitness Correlation, a new tool to analyze the evaluation accuracy of coevolutionary algorithms. Accurate evaluation is an essential ingredient in creating adequate coevolutionary dynamics. Based on the notion of a solution concept, a new definition for objective fitness in coevolution is provided. The correlation between the objective fitness and the subjective fitness used in a coevolutionary algorithm yields the Objective Fitness Correlation. The OFC measure is applied to three coevolutionary evaluation methods. 
It is found that the Objective Fitness Correlation varies substantially over time. Moreover, a high OFC is found to correspond to periods where the algorithm is able to increase the objective quality of individuals. This is evidence of the utility of OFC as a measure to evaluate and compare coevolutionary evaluation mechanisms. The Objective Fitness Correlation (OFC) provides a precise analytical tool to measure the accuracy of evaluation in coevolutionary algorithms.) <|cite_end|>, the correlation between the subjective and the objective fitness measures, is used to assess the evaluation accuracy of the different methods.
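Computationally, the OFC is straightforward once both fitness vectors are available. The sketch below assumes the Pearson product-moment coefficient (a rank correlation could be substituted) and uses invented fitness values; in the experiments, the subjective values come from the coevolutionary evaluation method and the objective values from evaluation against all test cases.
\begin{verbatim}
# Sketch of the OFC as a correlation between subjective and objective
# fitness. The vectors below are invented for illustration.
import numpy as np

subjective = np.array([0.8, 0.6, 0.9, 0.4, 0.7])   # vs. the evolved tests
objective = np.array([0.75, 0.55, 0.95, 0.35, 0.65])  # vs. all tests

ofc = np.corrcoef(subjective, objective)[0, 1]  # Pearson correlation
print(round(ofc, 3))
\end{verbatim}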
The paper is structured as follows. In section 2 we discuss the evaluation methods and algorithms used in this research. The design of experiments, parameters and performance measures are described in section 3. The results are presented in section 4, and the discussions and concluding remarks are shown in section 5. <|paper_end|> | [
"<|reference_start|> Co-evolving parasites improve simulated evolution as an optimization procedure: <|reference_end|>",
"<|reference_start|> Coevolutionary computation: This article proposes a general framework for the use of coevolution to boost the performance of genetic search. It combines coevolution with yet another biologically inspired technique, called lifetime fitness evaluation (LTFE). Two unrelated problems—neural net learning and constraint satisfaction—are used to illustrate the approach. Both problems use predator-prey interactions to boost the search. In contrast with traditional single population genetic algorithms (GAs), two populations constantly interact and coevolve. However, the same algorithm can also be used with different types of coevolutionary interactions. As an example, the symbiotic coevolution of solutions and genetic representations is shown to provide an elegant solution to the problem of finding a suitable genetic representation. The approach presented here greatly profits from the partial and continuous nature of LTFE. Noise tolerance is one advantage. Even more important, LTFE is ideally suited to deal with coupled fitness landscapes typical for coevolution. <|reference_end|>",
"<|reference_start|> Solution concepts in coevolutionary algorithms: Inspired by the principle of natural selection, coevolutionary algorithms are search methods in which processes of mutual adaptation occur amongst agents that interact strategically. The outcomes of interaction reveal a reward structure that guides evolution towards the discovery of increasingly adaptive behaviors. Thus, coevolutionary algorithms are often used to search for optimal agent behaviors in domains of strategic interaction. \nCoevolutionary algorithms require little a priori knowledge about the domain. We assume the learning task necessitates the algorithm to (1) discover agent behaviors, (2) learn the domain's reward structure, and (3) approximate an optimal solution. Despite the many successes of coevolutionary optimization, the practitioner frequently observes a gap between the properties that actually confer agent adaptivity and those expected (or desired) to yield adaptivity, or optimality. This gap is manifested by a variety of well-known pathologies, such as cyclic dynamics, loss of fitness gradient, and evolutionary forgetting. \nThis dissertation examines the divergence between expectation and actuality in co-evolutionary algorithms—why selection pressures fail to conform to our beliefs about adaptiveness, or why our beliefs are evidently erroneous. When we confront the pathologies of coevolutionary algorithms as a collection, we find that they are essentially epiphenomena of a single fundamental problem, namely a lack of rigor in our solution concepts . \nA solution concept is a formalism with which to describe and understand the incentive structures of agents that interact strategically. All coevolutionary algorithms implement some solution concept, whether by design or by accident, and optimize according to it. Failures to obtain the desiderata of “complexity” or “optimality” often indicate a dissonance between the implemented solution concept and that required by our envisaged goal. \nWe make the following contributions: (1) We show that solution concepts are the critical link between our expectations of coevolution and the outcomes actually delivered by algorithm operation, and are therefore crucial to explicating the divergence between the two, (2) We provide analytic results that show how solution concepts bring our expectations in line with algorithmic reality, and (3) We show how solution concepts empower us to construct algorithms that operate more in line with our goals. <|reference_end|>",
"<|reference_start|> Revisiting the edge of chaos: evolving cellular automata to perform computations: Author(s): Mitchell, Melanie; Hraber, Peter; Crutchfield, James P | Abstract: We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near ``critical'' lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. <|reference_end|>"
] | [
0,
3,
7,
12
] | {"<|multi_cite_1_1|>": "ss-854432", "<|multi_cite_1_2|>": "ss-2291794", "<|multi_cite_1_3|>": "ss-854433", "<|multi_cite_1_4|>": "ss-1258557", "<|multi_cite_1_5|>": "ss-854434", "<|multi_cite_1_6|>": "ss-854435", "<|cite_2|>": "ss-883094", "<|multi_cite_3_1|>": "ss-883094", "<|multi_cite_3_2|>": "ss-854436", "<|cite_4|>": "ss-854437", "<|cite_5|>": "ss-854438", "<|cite_6|>": "ss-854439", "<|cite_7|>": "ss-2012108", "<|cite_8|>": "ss-1153255", "<|cite_9|>": "ss-854433", "<|cite_10|>": "ss-854440"} |
2308.10053-1 | <|cite_start|> (Reference: Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent.) <|cite_end|>.
Following these advances, many works successfully deploy large language models to a wide range of downstream tasks such as question answering, numerical reasoning, code generation, and commonsense reasoning without any gradient updates <|cite_start|> (Reference: Codegeex: A pre-trained model for code generation with multilingual evaluations on humaneval-x: ,) <|cite_end|> <|cite_start|> (Reference: In Advances in Neural Information Processing Systems: ) <|cite_end|> <|cite_start|> (Reference: Competition-Level Code Generation with AlphaCode: Programming is a powerful and ubiquitous problem-solving tool. Developing systems that can assist programmers or even generate programs independently could make programming more productive and accessible, yet so far incorporating innovations in AI has proven challenging. Recent large-scale language models have demonstrated an impressive ability to generate code, and are now able to complete simple programming tasks. However, these models still perform poorly when evaluated on more complex, unseen problems that require problem-solving skills beyond simply translating instructions into code. For example, competitive programming problems which require an understanding of algorithms and complex natural language remain extremely challenging. To address this gap, we introduce AlphaCode, a system for code generation that can create novel solutions to these problems that require deeper reasoning. In simulated evaluations on recent programming competitions on the Codeforces platform, AlphaCode achieved on average a ranking of top 54.3% in competitions with more than 5,000 participants. We found that three key components were critical to achieve good and reliable performance: (1) an extensive and clean competitive programming dataset for training and evaluation, (2) large and efficient-to-sample transformer-based architectures, and (3) large-scale model sampling to explore the search space, followed by filtering based on program behavior to a small set of submissions.) <|cite_end|> <|cite_start|> (Reference: Scaling Laws for Neural Language Models: We study empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power-law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.) <|cite_end|>. Recently, there have been various attempts by the recommendation community to leverage large language models for recommendation, this includes both adapting architectures used by large language models <|cite_start|> (Reference: Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5): For a long time, different recommendation tasks require designing task-specific architectures and training objectives. 
As a result, it is hard to transfer the knowledge and representations from one task to another, thus restricting the generalization ability of existing recommendation approaches. To deal with such issues, considering that language can describe almost anything and language grounding is a powerful medium to represent various problems or tasks, we present a flexible and unified text-to-text paradigm called “Pretrain, Personalized Prompt, and Predict Paradigm” (P5) for recommendation, which unifies various recommendation tasks in a shared framework. In P5, all data such as user-item interactions, user descriptions, item metadata, and user reviews are converted to a common format — natural language sequences. The rich information from natural language assists P5 to capture deeper semantics for personalization and recommendation. Specifically, P5 learns different tasks with the same language modeling objective during pretraining. Thus, it serves as the foundation model for various downstream recommendation tasks, allows easy integration with other modalities, and enables instruction-based recommendation. P5 advances recommender systems from shallow model to deep model to big model, and will revolutionize the technical form of recommender systems towards universal recommendation engine. With adaptive personalized prompt for different users, P5 is able to make predictions in a zero-shot or few-shot manner and largely reduces the necessity for extensive fine-tuning. On several benchmarks, we conduct experiments to show the effectiveness of P5. To help advance future research on Recommendation as Language Processing (RLP), Personalized Foundation Models (PFM), and Universal Recommendation Engine (URE), we release the source code, dataset, prompts, and pretrained P5 model at https://github.com/jeykigung/P5.) <|cite_end|> <|cite_start|> (Reference: M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems: Industrial recommender systems have been growing increasingly complex, may involve \emph{diverse domains} such as e-commerce products and user-generated contents, and can comprise \emph{a myriad of tasks} such as retrieval, ranking, explanation generation, and even AI-assisted content production. The mainstream approach so far is to develop individual algorithms for each domain and each task. In this paper, we explore the possibility of developing a unified foundation model to support \emph{open-ended domains and tasks} in an industrial recommender system, which may reduce the demand on downstream settings' data and can minimize the carbon footprint by avoiding training a separate model from scratch for every task. Deriving a unified foundation is challenging due to (i) the potentially unlimited set of downstream domains and tasks, and (ii) the real-world systems' emphasis on computational efficiency. We thus build our foundation upon M6, an existing large-scale industrial pretrained language model similar to GPT-3 and T5, and leverage M6's pretrained ability for sample-efficient downstream adaptation, by representing user behavior data as plain texts and converting the tasks to either language understanding or generation. To deal with a tight hardware budget, we propose an improved version of prompt tuning that outperforms fine-tuning with negligible 1\% task-specific parameters, and employ techniques such as late interaction, early exiting, parameter sharing, and pruning to further reduce the inference time and the model size. 
We demonstrate the foundation model's versatility on a wide range of tasks such as retrieval, ranking, zero-shot recommendation, explanation generation, personalized content creation, and conversational recommendation, and manage to deploy it on both cloud servers and mobile devices.) <|cite_end|>and repurposing existing LLMs for recommendation <|cite_start|> (Reference: GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation: Recent advancements in Natural Language Processing (NLP) have led to the development of NLP-based recommender systems that have shown superior performance. However, current models commonly treat items as mere IDs and adopt discriminative modeling, resulting in limitations of (1) fully leveraging the content information of items and the language modeling capabilities of NLP models; (2) interpreting user interests to improve relevance and diversity; and (3) adapting practical circumstances such as growing item inventories. To address these limitations, we present GPT4Rec, a novel and flexible generative framework inspired by search engines. It first generates hypothetical "search queries" given item titles in a user's history, and then retrieves items for recommendation by searching these queries. The framework overcomes previous limitations by learning both user and item embeddings in the language space. To well-capture user interests with different aspects and granularity for improving relevance and diversity, we propose a multi-query generation technique with beam search. The generated queries naturally serve as interpretable representations of user interests and can be searched to recommend cold-start items. With GPT-2 language model and BM25 search engine, our framework outperforms state-of-the-art methods by $75.7\%$ and $22.2\%$ in Recall@K on two public datasets. Experiments further revealed that multi-query generation with beam search improves both the diversity of retrieved items and the coverage of a user's multi-interests. The adaptiveness and interpretability of generated queries are discussed with qualitative case studies.) <|cite_end|> <|cite_start|> (Reference: Generative Recommendation: Towards Next-generation Recommender Paradigm: Recommender systems typically retrieve items from an item corpus for personalized recommendations. However, such a retrieval-based recommender paradigm faces two limitations: 1) the human-generated items in the corpus might fail to satisfy the users' diverse information needs, and 2) users usually adjust the recommendations via inefficient passive feedback, e.g., clicks. Nowadays, AI-Generated Content (AIGC) has revealed significant success, offering the potential to overcome these limitations: 1) generative AI can produce personalized items to satisfy users' information needs, and 2) the newly emerged large language models significantly reduce the efforts of users to precisely express information needs via natural language instructions. In this light, the boom of AIGC points the way towards the next-generation recommender paradigm with two new objectives: 1) generating personalized content through generative AI, and 2) integrating user instructions to guide content generation. To this end, we propose a novel Generative Recommender paradigm named GeneRec, which adopts an AI generator to personalize content generation and leverages user instructions. Specifically, we pre-process users' instructions and traditional feedback via an instructor to output the generation guidance. 
Given the guidance, we instantiate the AI generator through an AI editor and an AI creator to repurpose existing items and create new items. Eventually, GeneRec can perform content retrieval, repurposing, and creation to satisfy users' information needs. Besides, to ensure the trustworthiness of the generated items, we emphasize various fidelity checks. Moreover, we provide a roadmap to envision future developments of GeneRec and several domain-specific applications of GeneRec with potential research tasks. Lastly, we study the feasibility of implementing AI editor and AI creator on micro-video generation.) <|cite_end|> <|cite_start|> (Reference: Is ChatGPT a Good Recommender? A Preliminary Study: Recommendation systems have witnessed significant advancements and have been widely used over the past decades. However, most traditional recommendation methods are task-specific and therefore lack efficient generalization ability. Recently, the emergence of ChatGPT has significantly advanced NLP tasks by enhancing the capabilities of conversational models. Nonetheless, the application of ChatGPT in the recommendation domain has not been thoroughly investigated. In this paper, we employ ChatGPT as a general-purpose recommendation model to explore its potential for transferring extensive linguistic and world knowledge acquired from large-scale corpora to recommendation scenarios. Specifically, we design a set of prompts and evaluate ChatGPT's performance on five recommendation scenarios. Unlike traditional recommendation methods, we do not fine-tune ChatGPT during the entire evaluation process, relying only on the prompts themselves to convert recommendation tasks into natural language tasks. Further, we explore the use of few-shot prompting to inject interaction information that contains user potential interest to help ChatGPT better understand user needs and interests. Comprehensive experimental results on Amazon Beauty dataset show that ChatGPT has achieved promising results in certain tasks and is capable of reaching the baseline level in others. We conduct human evaluations on two explainability-oriented tasks to more accurately evaluate the quality of contents generated by different models. And the human evaluations show ChatGPT can truly understand the provided information and generate clearer and more reasonable results. We hope that our study can inspire researchers to further explore the potential of language models like ChatGPT to improve recommendation performance and contribute to the advancement of the recommendation systems field.) <|cite_end|>.
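As a purely illustrative sketch (not the protocol of any of the works cited above), zero-shot conversational recommendation with an instruction-tuned LLM can be reduced to serializing the dialogue into a prompt and parsing the returned item list; the helper names and the llm_generate callable below are placeholders for whichever model is used.
\begin{verbatim}
def build_prompt(dialogue, k=5):
    """Serialize a user--system dialogue into a zero-shot recommendation prompt."""
    turns = "\n".join(f"{speaker}: {text}" for speaker, text in dialogue)
    return (
        "You are a movie recommender. Given the conversation below, "
        f"recommend {k} movies the user is likely to enjoy, one per line.\n\n"
        f"{turns}\n\nRecommendations:"
    )

def recommend(dialogue, llm_generate, k=5):
    # llm_generate is any text-completion callable; the model choice is left open.
    reply = llm_generate(build_prompt(dialogue, k))
    return [line.strip("- ").strip() for line in reply.splitlines() if line.strip()][:k]
\end{verbatim}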
However, to the best of our knowledge, ours is the first work to provide a systematic quantitative analysis of LLMs' ability at \textit{conversational} recommendation. <|paper_end|>
"<|reference_start|> Training language models to follow instructions with human feedback: Making language models bigger does not inherently make them better at following a user's intent. For example, large language models can generate outputs that are untruthful, toxic, or simply not helpful to the user. In other words, these models are not aligned with their users. In this paper, we show an avenue for aligning language models with user intent on a wide range of tasks by fine-tuning with human feedback. Starting with a set of labeler-written prompts and prompts submitted through the OpenAI API, we collect a dataset of labeler demonstrations of the desired model behavior, which we use to fine-tune GPT-3 using supervised learning. We then collect a dataset of rankings of model outputs, which we use to further fine-tune this supervised model using reinforcement learning from human feedback. We call the resulting models InstructGPT. In human evaluations on our prompt distribution, outputs from the 1.3B parameter InstructGPT model are preferred to outputs from the 175B GPT-3, despite having 100x fewer parameters. Moreover, InstructGPT models show improvements in truthfulness and reductions in toxic output generation while having minimal performance regressions on public NLP datasets. Even though InstructGPT still makes simple mistakes, our results show that fine-tuning with human feedback is a promising direction for aligning language models with human intent. <|reference_end|>",
"<|reference_start|> Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5): For a long time, different recommendation tasks require designing task-specific architectures and training objectives. As a result, it is hard to transfer the knowledge and representations from one task to another, thus restricting the generalization ability of existing recommendation approaches. To deal with such issues, considering that language can describe almost anything and language grounding is a powerful medium to represent various problems or tasks, we present a flexible and unified text-to-text paradigm called “Pretrain, Personalized Prompt, and Predict Paradigm” (P5) for recommendation, which unifies various recommendation tasks in a shared framework. In P5, all data such as user-item interactions, user descriptions, item metadata, and user reviews are converted to a common format — natural language sequences. The rich information from natural language assists P5 to capture deeper semantics for personalization and recommendation. Specifically, P5 learns different tasks with the same language modeling objective during pretraining. Thus, it serves as the foundation model for various downstream recommendation tasks, allows easy integration with other modalities, and enables instruction-based recommendation. P5 advances recommender systems from shallow model to deep model to big model, and will revolutionize the technical form of recommender systems towards universal recommendation engine. With adaptive personalized prompt for different users, P5 is able to make predictions in a zero-shot or few-shot manner and largely reduces the necessity for extensive fine-tuning. On several benchmarks, we conduct experiments to show the effectiveness of P5. To help advance future research on Recommendation as Language Processing (RLP), Personalized Foundation Models (PFM), and Universal Recommendation Engine (URE), we release the source code, dataset, prompts, and pretrained P5 model at https://github.com/jeykigung/P5. <|reference_end|>",
"<|reference_start|> M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems: Industrial recommender systems have been growing increasingly complex, may involve \\emph{diverse domains} such as e-commerce products and user-generated contents, and can comprise \\emph{a myriad of tasks} such as retrieval, ranking, explanation generation, and even AI-assisted content production. The mainstream approach so far is to develop individual algorithms for each domain and each task. In this paper, we explore the possibility of developing a unified foundation model to support \\emph{open-ended domains and tasks} in an industrial recommender system, which may reduce the demand on downstream settings' data and can minimize the carbon footprint by avoiding training a separate model from scratch for every task. Deriving a unified foundation is challenging due to (i) the potentially unlimited set of downstream domains and tasks, and (ii) the real-world systems' emphasis on computational efficiency. We thus build our foundation upon M6, an existing large-scale industrial pretrained language model similar to GPT-3 and T5, and leverage M6's pretrained ability for sample-efficient downstream adaptation, by representing user behavior data as plain texts and converting the tasks to either language understanding or generation. To deal with a tight hardware budget, we propose an improved version of prompt tuning that outperforms fine-tuning with negligible 1\\% task-specific parameters, and employ techniques such as late interaction, early exiting, parameter sharing, and pruning to further reduce the inference time and the model size. We demonstrate the foundation model's versatility on a wide range of tasks such as retrieval, ranking, zero-shot recommendation, explanation generation, personalized content creation, and conversational recommendation, and manage to deploy it on both cloud servers and mobile devices. <|reference_end|>",
"<|reference_start|> GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation: Recent advancements in Natural Language Processing (NLP) have led to the development of NLP-based recommender systems that have shown superior performance. However, current models commonly treat items as mere IDs and adopt discriminative modeling, resulting in limitations of (1) fully leveraging the content information of items and the language modeling capabilities of NLP models; (2) interpreting user interests to improve relevance and diversity; and (3) adapting practical circumstances such as growing item inventories. To address these limitations, we present GPT4Rec, a novel and flexible generative framework inspired by search engines. It first generates hypothetical \"search queries\" given item titles in a user's history, and then retrieves items for recommendation by searching these queries. The framework overcomes previous limitations by learning both user and item embeddings in the language space. To well-capture user interests with different aspects and granularity for improving relevance and diversity, we propose a multi-query generation technique with beam search. The generated queries naturally serve as interpretable representations of user interests and can be searched to recommend cold-start items. With GPT-2 language model and BM25 search engine, our framework outperforms state-of-the-art methods by $75.7\\%$ and $22.2\\%$ in Recall@K on two public datasets. Experiments further revealed that multi-query generation with beam search improves both the diversity of retrieved items and the coverage of a user's multi-interests. The adaptiveness and interpretability of generated queries are discussed with qualitative case studies. <|reference_end|>"
] | [
0,
5,
6,
7
] | {"<|multi_cite_1_1|>": "arxiv-185003", "<|multi_cite_1_2|>": "arxiv-218786", "<|multi_cite_1_3|>": "arxiv-277137", "<|multi_cite_1_4|>": "arxiv-428191", "<|cite_2|>": "arxiv-493693", "<|multi_cite_4_1|>": "arxiv-268228", "<|multi_cite_4_2|>": "arxiv-489148", "<|multi_cite_4_3|>": "arxiv-493693", "<|multi_cite_4_4|>": "arxiv-491067", "<|multi_cite_5_1|>": "arxiv-503917", "<|multi_cite_5_2|>": "arxiv-501041", "<|multi_cite_5_3|>": "arxiv-505027", "<|multi_cite_5_4|>": "arxiv-499083", "<|multi_cite_5_5|>": "arxiv-498457", "<|multi_cite_6_1|>": "arxiv-503917", "<|multi_cite_6_2|>": "arxiv-501041", "<|multi_cite_6_3|>": "arxiv-505027", "<|multi_cite_6_4|>": "arxiv-498457", "<|multi_cite_7_1|>": "arxiv-504609", "<|multi_cite_7_2|>": "arxiv-507333", "<|multi_cite_8_2|>": "arxiv-489148", "<|multi_cite_8_4|>": "arxiv-494240", "<|cite_9|>": "arxiv-185003", "<|cite_10|>": "arxiv-292822", "<|multi_cite_11_1|>": "arxiv-185003", "<|multi_cite_11_2|>": "arxiv-218786", "<|multi_cite_11_3|>": "arxiv-277137", "<|multi_cite_11_4|>": "arxiv-428191", "<|cite_12|>": "arxiv-281696", "<|multi_cite_13_1|>": "ss-1523144", "<|multi_cite_13_2|>": "arxiv-275563", "<|multi_cite_13_3|>": "arxiv-249575", "<|multi_cite_13_4|>": "arxiv-436382", "<|multi_cite_13_5|>": "arxiv-389148", "<|multi_cite_14_1|>": "ss-960101", "<|multi_cite_14_2|>": "ss-1320152", "<|multi_cite_14_3|>": "arxiv-386489", "<|multi_cite_15_1|>": "arxiv-185003", "<|multi_cite_15_2|>": "arxiv-218786", "<|multi_cite_15_3|>": "arxiv-428191", "<|multi_cite_16_1|>": "arxiv-218786", "<|multi_cite_16_2|>": "arxiv-277137", "<|multi_cite_17_1|>": "ss-847699", "<|multi_cite_17_2|>": "ss-935364", "<|cite_18|>": "arxiv-345062", "<|multi_cite_19_1|>": "ss-1179370", "<|multi_cite_19_2|>": "arxiv-414211", "<|cite_20|>": "arxiv-428191", "<|cite_21|>": "arxiv-232066", "<|cite_22|>": "arxiv-268228", "<|cite_23|>": "arxiv-185003", "<|cite_24|>": "arxiv-292822", "<|cite_25|>": "arxiv-504609", "<|cite_26|>": "arxiv-507333", "<|multi_cite_27_1|>": "arxiv-411079", "<|multi_cite_27_2|>": "ss-832115", "<|multi_cite_27_3|>": "ss-832115", "<|cite_28|>": "arxiv-244537", "<|multi_cite_29_1|>": "arxiv-374481", "<|multi_cite_29_2|>": "arxiv-403294", "<|multi_cite_30_1|>": "ss-688577", "<|multi_cite_30_2|>": "ss-832115", "<|multi_cite_30_3|>": "arxiv-405699", "<|multi_cite_30_4|>": "arxiv-244537", "<|multi_cite_31_1|>": "ss-739857", "<|multi_cite_31_2|>": "arxiv-419969", "<|multi_cite_32_1|>": "arxiv-495495", "<|multi_cite_32_2|>": "arxiv-495332", "<|multi_cite_32_3|>": "arxiv-498457"} |
2210.08225 | <|paper_start|> Title: Learned Video Compression for YUV 4:2:0 Content Using Flow-based Conditional Inter-frame Coding
Abstract: Learned Video Compression for YUV 4:2:0 Content Using Flow-based Conditional Inter-frame Coding: This paper proposes a learning-based video compression framework for variable-rate coding on YUV 4:2:0 content. Most existing learning-based video compression models adopt the traditional hybrid-based coding architecture, which involves temporal prediction followed by residual coding. However, recent studies have shown that residual coding is sub-optimal from the information-theoretic perspective. In addition, most existing models are optimized with respect to RGB content. Furthermore, they require separate models for variable-rate coding. To address these issues, this work presents an attempt to incorporate the conditional inter-frame coding for YUV 4:2:0 content. We introduce a conditional flow-based inter-frame coder to improve the inter-frame coding efficiency. To adapt our codec to YUV 4:2:0 content, we adopt a simple strategy of using space-to-depth and depth-to-space conversions. Lastly, we employ a rate-adaption net to achieve variable-rate coding without training multiple models. Experimental results show that our model performs better than x265 on UVG and MCL-JCV datasets in terms of PSNR-YUV. However, on the more challenging datasets from ISCAS'22 GC, there is still ample room for improvement. This insufficient performance is due to the lack of inter-frame coding capability at a large GOP size and can be mitigated by increasing the model capacity and applying an error propagation-aware training strategy.
Introduction
Since deep neural networks have demonstrated their great potential in computer vision tasks, learning-based video compression has rapidly risen in recent years. DVC <|cite_start|> (Reference: DVC: An End-to-end Deep Video Compression Framework: Conventional video compression approaches use the predictive coding architecture and encode the corresponding motion information and residual information. In this paper, taking advantage of both classical architecture in the conventional video compression method and the powerful non-linear representation ability of neural networks, we propose the first end-to-end video compression deep model that jointly optimizes all the components for video compression. Specifically, learning based optical flow estimation is utilized to obtain the motion information and reconstruct the current frames. Then we employ two auto-encoder style neural networks to compress the corresponding motion and residual information. All the modules are jointly learned through a single loss function, in which they collaborate with each other by considering the trade-off between reducing the number of compression bits and improving quality of the decoded video. Experimental results show that the proposed approach can outperform the widely used video coding standard H.264 in terms of PSNR and be even on par with the latest standard H.265 in terms of MS-SSIM. Code is released at https://github.com/GuoLusjtu/DVC.) <|cite_end|> is the first work that integrates neural networks with the predictive coding concepts for video compression. Following works like M-LVC <|cite_start|> (Reference: M-LVC: Multiple Frames Prediction for Learned Video Compression: We propose an end-to-end learned video compression scheme for low-latency scenarios. Previous methods are limited in using the previous one frame as reference. Our method introduces the usage of the previous multiple frames as references. In our scheme, the motion vector (MV) field is calculated between the current frame and the previous one. With multiple reference frames and associated multiple MV fields, our designed network can generate more accurate prediction of the current frame, yielding less residual. Multiple reference frames also help generate MV prediction, which reduces the coding cost of MV field. We use two deep auto-encoders to compress the residual and the MV, respectively. To compensate for the compression error of the auto-encoders, we further design a MV refinement network and a residual refinement network, taking use of the multiple reference frames as well. All the modules in our scheme are jointly optimized through a single rate-distortion loss function. We use a step-by-step training strategy to optimize the entire scheme. Experimental results show that the proposed method outperforms the existing learned video compression methods for low-latency mode. Our method also performs better than H.265 in both PSNR and MS-SSIM. Our code and models are publicly available.) <|cite_end|> and HLVC <|cite_start|> (Reference: Learning for Video Compression with Hierarchical Quality and Recurrent Enhancement: In this paper, we propose a Hierarchical Learned Video Compression (HLVC) method with three hierarchical quality layers and a recurrent enhancement network. The frames in the first layer are compressed by an image compression method with the highest quality. Using these frames as references, we propose the Bi-Directional Deep Compression (BDDC) network to compress the second layer with relatively high quality. 
Then, the third layer frames are compressed with the lowest quality, by the proposed Single Motion Deep Compression (SMDC) network, which adopts a single motion map to estimate the motions of multiple frames, thus saving bits for motion information. In our deep decoder, we develop the Weighted Recurrent Quality Enhancement (WRQE) network, which takes both compressed frames and the bit stream as inputs. In the recurrent cell of WRQE, the memory and update signal are weighted by quality features to reasonably leverage multi-frame information for enhancement. In our HLVC approach, the hierarchical quality benefits the coding efficiency, since the high quality information facilitates the compression and enhancement of low quality frames at encoder and decoder sides, respectively. Finally, the experiments validate that our HLVC approach advances the state-of-the-art of deep video compression methods, and outperforms the "Low-Delay P (LDP) very fast" mode of x265 in terms of both PSNR and MS-SSIM. The project page is at https://github.com/RenYang-home/HLVC.) <|cite_end|> utilize multi-reference frames to improve the coding efficiency. Furthermore, FVC <|cite_start|> (Reference: FVC: A New Framework towards Deep Video Compression in Feature Space: Learning based video compression attracts increasing attention in the past few years. The previous hybrid coding approaches rely on pixel space operations to reduce spatial and temporal redundancy, which may suffer from inaccurate motion estimation or less effective motion compensation. In this work, we propose a feature-space video coding network (FVC) by performing all major operations (i.e., motion estimation, motion compression, motion compensation and residual compression) in the feature space. Specifically, in the proposed deformable compensation module, we first apply motion estimation in the feature space to produce motion information (i.e., the offset maps), which will be compressed by using the auto-encoder style network. Then we perform motion compensation by using deformable convolution and generate the predicted feature. After that, we compress the residual feature between the feature from the current frame and the predicted feature from our deformable compensation module. For better frame reconstruction, the reference features from multiple previous reconstructed frames are also fused by using the non-local attention mechanism in the multi-frame feature fusion module. Comprehensive experimental results demonstrate that the proposed framework achieves the state-of-the-art performance on four benchmark datasets including HEVC, UVG, VTL and MCL-JCV.) <|cite_end|> performs predictive coding operations in the feature domain with the deformable convolution. ELV-VC <|cite_start|> (Reference: ELF-VC: Efficient Learned Flexible-Rate Video Coding: While learned video codecs have demonstrated great promise, they have yet to achieve sufficient efficiency for practical deployment. In this work, we propose several novel ideas for learned video compression which allow for improved performance for the low-latency mode (I- and P-frames only) along with a considerable increase in computational efficiency. In this setting, for natural videos our approach compares favorably across the entire R-D curve under metrics PSNR, MS-SSIM and VMAF against all mainstream video standards (H.264, H.265, AV1) and all ML codecs. At the same time, our approach runs at least 5x faster and has fewer parameters than all ML codecs which report these figures. 
Our contributions include a flexible-rate framework allowing a single model to cover a large and dense range of bitrates, at a negligible increase in computation and parameter count; an efficient backbone optimized for ML-based codecs; and a novel in-loop flow prediction scheme which leverages prior information towards more efficient compression. We benchmark our method, which we call ELF-VC (Efficient, Learned and Flexible Video Coding) on popular video test sets UVG and MCL-JCV under metrics PSNR, MS-SSIM and VMAF. For example, on UVG under PSNR, it reduces the BD-rate by 44% against H.264, 26% against H.265, 15% against AV1, and 35% against the current best ML codec. At the same time, on an NVIDIA Titan V GPU our approach encodes/decodes VGA at 49/91 FPS, HD 720 at 19/35 FPS, and HD 1080 at 10/18 FPS.) <|cite_end|> proposes to effectively send the incremental flow based on the flow map predictor. Nevertheless, several issues remain unsolved for learning-based video compression.
First of all, the effectiveness of the residual coding is a concern, and the learning-based approach should provide more flexibility than traditional predictive coding. Ladune~\textit{et al.} <|cite_start|> (Reference: Optical Flow and Mode Selection for Learning-based Video Coding: This paper introduces a new method for inter-frame coding based on two complementary autoencoders: MOFNet and CodecNet. MOFNet aims at computing and conveying the Optical Flow and a pixel-wise coding Mode selection. The optical flow is used to perform a prediction of the frame to code. The coding mode selection enables competition between direct copy of the prediction or transmission through CodecNet. The proposed coding scheme is assessed under the Challenge on Learned Image Compression 2020 (CLIC20) P-frame coding conditions, where it is shown to perform on par with the state-of-the-art video codec ITU/MPEG HEVC. Moreover, the possibility of copying the prediction enables to learn the optical flow in an end-to-end fashion i.e. without relying on pre-training and/or a dedicated loss term.) <|cite_end|> first point out the inefficiency of the residual coding from the perspective of information theory. They explain that given the motion-compensated frame $x_c$ for coding the target frame $x_t$, the expected entropy of residual coding should be greater than or equal to the conditional coding
$H(x_t - x_c) \geq H(x_t - x_c | x_c) = H(x_t | x_c)$. To this end, they propose to use conditional VAE that concatenates the motion-compensated frame with the target frame and the latent features in the encoding and decoding processes. DCVC <|cite_start|> (Reference: Deep Contextual Video Compression: Most of the existing neural video compression methods adopt the predictive coding framework, which first generates the predicted frame and then encodes its residue with the current frame. However, as for compression ratio, predictive coding is only a sub-optimal solution as it uses simple subtraction operation to remove the redundancy across frames. In this paper, we propose a deep contextual video compression framework to enable a paradigm shift from predictive coding to conditional coding. In particular, we try to answer the following questions: how to define, use, and learn condition under a deep video compression framework. To tap the potential of conditional coding, we propose using feature domain context as condition. This enables us to leverage the high dimension context to carry rich information to both the encoder and the decoder, which helps reconstruct the high-frequency contents for higher video quality. Our framework is also extensible, in which the condition can be flexibly designed. Experiments show that our method can significantly outperform the previous state-of-the-art (SOTA) deep video compression methods. When compared with x265 using veryslow preset, we can achieve 26.0% bitrate saving for 1080P standard test videos.) <|cite_end|> improves Ladune's work by replacing the motion-compensated frame with its latent representation. Additionally, a conditional temporal prior is introduced for better entropy coding. However, how to effectively use conditional information is still an issue to be discussed.
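To make the contrast concrete, the sketch below (PyTorch-style; the module sizes and names are our own assumptions, not the architecture of Ladune~\textit{et al.} or DCVC) shows conditional inter-frame coding: instead of encoding the residual $x_t - x_c$, both the encoder and the decoder receive the motion-compensated prediction, or features derived from it, as a condition.
\begin{verbatim}
import torch
import torch.nn as nn

def conv(cin, cout, stride=1):
    return nn.Conv2d(cin, cout, kernel_size=3, stride=stride, padding=1)

class ConditionalInterCoder(nn.Module):
    # Minimal conditional inter-frame coder: the prediction x_c is an input to
    # both the encoder and the decoder rather than being subtracted from x_t.
    def __init__(self, ch=64, latent=32):
        super().__init__()
        self.enc = nn.Sequential(conv(6, ch, 2), nn.ReLU(), conv(ch, latent, 2))
        self.cond = nn.Sequential(conv(3, ch, 2), nn.ReLU(), conv(ch, latent, 2))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(2 * latent, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1))

    def forward(self, x_t, x_c):
        y = self.enc(torch.cat([x_t, x_c], dim=1))    # encode x_t given x_c
        c = self.cond(x_c)                            # condition features for the decoder
        return self.dec(torch.cat([y, c], dim=1)), y  # decode y given the same condition

# Residual coding would instead transmit enc(x_t - x_c) and add the decoded
# residual back onto x_c, which cannot beat H(x_t | x_c) in expectation.
\end{verbatim}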
Secondly, the use of a single model to implement variable-rate coding and rate control is also a challenge for learning-based video compression. Most learned video compression methods can only be optimized for a single rate point and will cause high memory consumption. Choi~\textit{et al.} <|cite_start|> (Reference: Variable Rate Deep Image Compression With a Conditional Autoencoder: In this paper, we propose a novel variable-rate learned image compression framework with a conditional autoencoder. Previous learning-based image compression methods mostly require training separate networks for different compression rates so they can yield compressed images of varying quality. In contrast, we train and deploy only one variable-rate image compression network implemented with a conditional autoencoder. We provide two rate control parameters, i.e., the Lagrange multiplier and the quantization bin size, which are given as conditioning variables to the network. Coarse rate adaptation to a target is performed by changing the Lagrange multiplier, while the rate can be further fine-tuned by adjusting the bin size used in quantizing the encoded representation. Our experimental results show that the proposed scheme provides a better rate-distortion trade-off than the traditional variable-rate image compression codecs such as JPEG2000 and BPG. Our model also shows comparable and sometimes better performance than the state-of-the-art learned image compression models that deploy multiple networks trained for varying rates.) <|cite_end|> propose a multi-rate image compression network with conditional convolution. Conditional convolution performs channel-wise scaling and shifting of the intermediate features. By replacing each convolutional layer with conditional convolution (CConv), it reprograms the feature to adapt to different dynamic ranges. For video compression, Lin~\textit{et al.} <|cite_start|> (Reference: A deeply modulated scheme for variable-rate video compression: Rate adaption is one of the decisive factors for the applications of video compression. Previous deep video compression methods are usually optimized for a single fixed rate-distortion (R-D) tradeoff. While they can achieve multiple bitrates by training multiple independent models, the achievable bitrates are limited to several discrete points on the R-D curve and the storage cost increases proportionally to the number of models. We propose a variable-rate scheme for deep video compression, which can achieve continuously variable rate by a single model, i.e., reaching any point on the R-D curve. In our scheme, two deep auto-encoders are used to compress the residual and the motion vector field respectively, which directly generate the final bitstream. The basic rate adaptation can be achieved by using the R-D tradeoff parameter to deeply modulate all the internal feature maps of the auto-encoders. In addition, other modules in our scheme, notably motion estimation and motion compensation, also affect the final bitrate indirectly. We further use the R-D tradeoff parameter to modulate them via a conditional map, thereby effectively improving the compression efficiency. We use a multi-rate-distortion loss function together with a step-by-step training strategy to optimize the entire scheme. The experimental results show the proposed scheme achieves continuously variable rate by a single model with almost the same compression efficiency as multiple fixed-rate models. 
The additional parameters and computation of our model are negligible when compared with a single fixed-rate model.) <|cite_end|> further apply a similar technique, but without the shifting operation, to both the motion and the residual coder. Although these works provide solutions for variable-rate image and video compression, they are still unable to achieve precise rate control.
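A minimal sketch of such a conditional convolution is given below (PyTorch-style; the exact activation and layer layout are our assumptions rather than the released implementations): a rate-condition vector is mapped to per-channel scale and, optionally, shift factors that modulate an ordinary convolution, so a single set of weights can serve several rate points.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class CConv2d(nn.Module):
    # Conditional convolution: the output of an ordinary convolution is scaled
    # (and here also shifted) channel-wise by factors produced from a
    # rate-condition vector, as in Choi et al.; Lin et al. drop the shift term.
    def __init__(self, cin, cout, num_rates, use_shift=True):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, kernel_size=3, padding=1)
        self.scale = nn.Linear(num_rates, cout)
        self.shift = nn.Linear(num_rates, cout) if use_shift else None

    def forward(self, x, cond):
        # cond: (B, num_rates) one-hot (or interpolated) rate indicator
        y = self.conv(x)
        s = F.softplus(self.scale(cond)).unsqueeze(-1).unsqueeze(-1)  # (B, cout, 1, 1)
        y = y * s
        if self.shift is not None:
            y = y + self.shift(cond).unsqueeze(-1).unsqueeze(-1)
        return y

# Usage: x = torch.randn(2, 16, 32, 32)
#        cond = F.one_hot(torch.tensor([0, 3]), 4).float()
#        out = CConv2d(16, 32, num_rates=4)(x, cond)
\end{verbatim}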
Finally, most learning-based compression models operate in the RGB color space, whereas the YUV color format is far more common in practical video standards. How to handle the YUV 4:2:0 input format in learning-based video compression without sacrificing coding efficiency is therefore still an open question.
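One simple strategy, and the one adopted in the abstract above, is to apply a space-to-depth conversion to the full-resolution luma plane so that it can be stacked with the half-resolution chroma planes, with depth-to-space inverting the packing at the output; a minimal sketch (the helper names are ours) follows.
\begin{verbatim}
import torch
import torch.nn.functional as F

def yuv420_to_tensor(y, u, v):
    # Pack YUV 4:2:0 planes into one (B, 6, H/2, W/2) tensor.
    # y: (B, 1, H, W) luma; u, v: (B, 1, H/2, W/2) chroma.
    y4 = F.pixel_unshuffle(y, 2)          # space-to-depth: (B, 4, H/2, W/2)
    return torch.cat([y4, u, v], dim=1)   # (B, 6, H/2, W/2)

def tensor_to_yuv420(x):
    # Inverse: split the channels and apply depth-to-space to the luma part.
    y4, u, v = x[:, :4], x[:, 4:5], x[:, 5:6]
    y = F.pixel_shuffle(y4, 2)            # depth-to-space: (B, 1, H, W)
    return y, u, v
\end{verbatim}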
Considering all the above issues, we propose a conditional flow-based video compression framework that uses YUV 4:2:0 video as input format. Our framework can also use only one model to adapt to multiple bit rates, and it can be extended to achieve rate control. The experimental results show that our method performs better than x265 on UVG <|cite_start|> (Reference: {UVG Dataset: 50/120fps 4K Sequences for Video Codec Analysis and Development: This paper provides an overview of our open Ultra Video Group (UVG) dataset that is composed of 16 versatile 4K (3840×2160) test video sequences. These natural sequences were captured either at 50 or 120 frames per second (fps) and stored online in raw 8-bit and 10-bit 4:2:0 YUV formats. The dataset is published on our website (ultravideo.cs.tut.fi) under a non-commercial Creative Commons BY-NC license. In this paper, all UVG sequences are described in detail and characterized by their spatial and temporal perceptual information, rate-distortion behavior, and coding complexity with the latest HEVC/H.265 and VVC/H.266 reference video codecs. The proposed dataset is the first to provide complementary 4K sequences up to 120 fps and is therefore particularly valuable for cutting-edge multimedia applications. Our evaluations also show that it comprehensively complements the existing 4K test set in VVC standardization, so we recommend including it in subjective and objective quality assessments of next-generation VVC codecs.) <|cite_end|> and MCL-JCV <|cite_start|> (Reference: {MCL-JCV: a JND-based H. 264/AVC video quality assessment dataset: A compressed video quality assessment dataset based on the just noticeable difference (JND) model, called MCL-JCV, is recently constructed and released. In this work, we explain its design objectives, selected video content and subject test procedures. Then, we conduct statistical analysis on collected JND data. We compute the difference between every two adjacent JND points and propose an outlier detection algorithm to remove unreliable data. We also show that each JND difference group can be well approximated by a normal distribution so that we can adopt the Gaussian mixture model (GMM) to characterize the distribution of multiple JND points. Finally, it is demonstrated by experimental results that the proposed JND analysis performed in the difference domain, called the D-method, achieves a lower BIC (Bayesian information criteria) value than the previously proposed G-method.) <|cite_end|> datasets in terms of PSNR-YUV.
However, on the more challenging datasets from ISCAS'22 GC, there is still ample room for improvement. We believe that this inferior performance is due to insufficient inter-frame coding at a large GOP size, which can be improved by increasing the model capacity and applying an error propagation-aware training strategy. <|paper_end|> | [
"<|reference_start|> FVC: A New Framework towards Deep Video Compression in Feature Space: Learning based video compression attracts increasing attention in the past few years. The previous hybrid coding approaches rely on pixel space operations to reduce spatial and temporal redundancy, which may suffer from inaccurate motion estimation or less effective motion compensation. In this work, we propose a feature-space video coding network (FVC) by performing all major operations (i.e., motion estimation, motion compression, motion compensation and residual compression) in the feature space. Specifically, in the proposed deformable compensation module, we first apply motion estimation in the feature space to produce motion information (i.e., the offset maps), which will be compressed by using the auto-encoder style network. Then we perform motion compensation by using deformable convolution and generate the predicted feature. After that, we compress the residual feature between the feature from the current frame and the predicted feature from our deformable compensation module. For better frame reconstruction, the reference features from multiple previous reconstructed frames are also fused by using the non-local attention mechanism in the multi-frame feature fusion module. Comprehensive experimental results demonstrate that the proposed framework achieves the state-of-the-art performance on four benchmark datasets including HEVC, UVG, VTL and MCL-JCV. <|reference_end|>",
"<|reference_start|> ELF-VC: Efficient Learned Flexible-Rate Video Coding: While learned video codecs have demonstrated great promise, they have yet to achieve sufficient efficiency for practical deployment. In this work, we propose several novel ideas for learned video compression which allow for improved performance for the low-latency mode (I- and P-frames only) along with a considerable increase in computational efficiency. In this setting, for natural videos our approach compares favorably across the entire R-D curve under metrics PSNR, MS-SSIM and VMAF against all mainstream video standards (H.264, H.265, AV1) and all ML codecs. At the same time, our approach runs at least 5x faster and has fewer parameters than all ML codecs which report these figures. Our contributions include a flexible-rate framework allowing a single model to cover a large and dense range of bitrates, at a negligible increase in computation and parameter count; an efficient backbone optimized for ML-based codecs; and a novel in-loop flow prediction scheme which leverages prior information towards more efficient compression. We benchmark our method, which we call ELF-VC (Efficient, Learned and Flexible Video Coding) on popular video test sets UVG and MCL-JCV under metrics PSNR, MS-SSIM and VMAF. For example, on UVG under PSNR, it reduces the BD-rate by 44% against H.264, 26% against H.265, 15% against AV1, and 35% against the current best ML codec. At the same time, on an NVIDIA Titan V GPU our approach encodes/decodes VGA at 49/91 FPS, HD 720 at 19/35 FPS, and HD 1080 at 10/18 FPS. <|reference_end|>",
"<|reference_start|> Deep Contextual Video Compression: Most of the existing neural video compression methods adopt the predictive coding framework, which first generates the predicted frame and then encodes its residue with the current frame. However, as for compression ratio, predictive coding is only a sub-optimal solution as it uses simple subtraction operation to remove the redundancy across frames. In this paper, we propose a deep contextual video compression framework to enable a paradigm shift from predictive coding to conditional coding. In particular, we try to answer the following questions: how to define, use, and learn condition under a deep video compression framework. To tap the potential of conditional coding, we propose using feature domain context as condition. This enables us to leverage the high dimension context to carry rich information to both the encoder and the decoder, which helps reconstruct the high-frequency contents for higher video quality. Our framework is also extensible, in which the condition can be flexibly designed. Experiments show that our method can significantly outperform the previous state-of-the-art (SOTA) deep video compression methods. When compared with x265 using veryslow preset, we can achieve 26.0% bitrate saving for 1080P standard test videos. <|reference_end|>",
"<|reference_start|> A deeply modulated scheme for variable-rate video compression: Rate adaption is one of the decisive factors for the applications of video compression. Previous deep video compression methods are usually optimized for a single fixed rate-distortion (R-D) tradeoff. While they can achieve multiple bitrates by training multiple independent models, the achievable bitrates are limited to several discrete points on the R-D curve and the storage cost increases proportionally to the number of models. We propose a variable-rate scheme for deep video compression, which can achieve continuously variable rate by a single model, i.e., reaching any point on the R-D curve. In our scheme, two deep auto-encoders are used to compress the residual and the motion vector field respectively, which directly generate the final bitstream. The basic rate adaptation can be achieved by using the R-D tradeoff parameter to deeply modulate all the internal feature maps of the auto-encoders. In addition, other modules in our scheme, notably motion estimation and motion compensation, also affect the final bitrate indirectly. We further use the R-D tradeoff parameter to modulate them via a conditional map, thereby effectively improving the compression efficiency. We use a multi-rate-distortion loss function together with a step-by-step training strategy to optimize the entire scheme. The experimental results show the proposed scheme achieves continuously variable rate by a single model with almost the same compression efficiency as multiple fixed-rate models. The additional parameters and computation of our model are negligible when compared with a single fixed-rate model. <|reference_end|>"
] | [
3,
4,
6,
8
] | {"<|cite_1|>": "arxiv-182703", "<|cite_2|>": "arxiv-260747", "<|cite_3|>": "arxiv-251938", "<|cite_4|>": "arxiv-342162", "<|cite_5|>": "arxiv-337704", "<|cite_6|>": "arxiv-283035", "<|cite_7|>": "arxiv-370553", "<|cite_8|>": "arxiv-223023", "<|cite_9|>": "ss-2113822", "<|cite_10|>": "ss-726154", "<|cite_11|>": "ss-726155"} |
2203.02622 | <|paper_start|> Title: Scaling R-GCN Training with Graph Summarization
Abstract: Scaling R-GCN Training with Graph Summarization: Training of Relational Graph Convolutional Networks (R-GCN) is a memory-intensive task. The amount of gradient information that needs to be stored during training for real-world graphs is often too large for the amount of memory available on most GPUs. In this work, we experiment with the use of graph summarization techniques to compress the graph and hence reduce the amount of memory needed. After training the R-GCN on the graph summary, we transfer the weights back to the original graph and attempt to perform inference on it. We obtain reasonable results on the AIFB, MUTAG and AM datasets. Our experiments show that training on the graph summary can yield accuracy comparable to, or higher than, training on the original graphs. Furthermore, if we leave the time needed to compute the summary out of the equation, we observe that the smaller graph representations obtained with graph summarization methods reduce the computational overhead. However, further experiments are needed to evaluate additional graph summarization models and to check whether our findings also hold true for very large graphs.
Introduction
Knowledge Graphs (KGs) have emerged as an abstraction for representing and exploiting complex data, and for easing access to it <|cite_start|> (Reference: Introduction: What Is a Knowledge Graph?: ) <|cite_end|>.
As a result, extensive collections of data stored in KGs are now publicly available, spurring interest in novel technologies that aim to structure and analyze such data.
However, KGs, including well-known ones such as DBpedia and WikiData, remain incomplete.
There is an evident trade-off between the quantity of data available and its adequate coverage (completeness) <|cite_start|> (Reference: kgbench: A Collection of Knowledge Graph Datasets for Evaluating Relational and Multimodal Machine Learning: ) <|cite_end|>.
Predicting missing information in KGs, e.g., predicting missing links, is the main focus of statistical relational learning <|cite_start|> (Reference: Modeling Relational Data with Graph Convolutional Networks: Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline.) <|cite_end|>.
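For context, the R-GCN cited above updates each entity representation by aggregating its neighbours separately per relation,
\[ h_i^{(l+1)} = \sigma\Big( W_0^{(l)} h_i^{(l)} \;+\; \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_i^{r}} \frac{1}{c_{i,r}} W_r^{(l)} h_j^{(l)} \Big), \]
where $c_{i,r}$ is a normalization constant; this per-node, per-relation aggregation is also where the gradient information discussed in the abstract accumulates.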
However, such methods are challenged by large quantities of data and by the unknown structure of KGs, which harms the scalability of applications <|cite_start|> (Reference: A Collection of Benchmark Datasets for Systematic Evaluations of Machine Learning on the Semantic Web: ) <|cite_end|>.
Several techniques have been developed to tackle these larger graphs while still retaining their structure.
First, training of GCNs can be scaled by developing a memory-optimized implementation or distributing the computations over multiple GPUs <|cite_start|> (Reference: DistGNN: Scalable Distributed Training for Large-Scale Graph Neural Networks: Full-batch training on Graph Neural Networks (GNN) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible. It is challenging due to large memory capacity and bandwidth requirements on a single compute node and high communication volumes across multiple nodes. In this paper, we present DistGNN that optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters via an efficient shared memory implementation, communication reduction using a minimum vertex-cut graph partitioning algorithm and communication avoidance using a family of delayed-update algorithms. Our results on four common GNN benchmark datasets: Reddit, OGB-Products, OGB-Papers and Proteins, show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets, respectively, over baseline DGL implementations running on a single CPU socket) <|cite_end|>.
Second, some techniques work with a condensed representation of the KG that retains the original structure of the graph.
For example, Salha et al. <|cite_start|> (Reference: A Degeneracy Framework for Scalable Graph Autoencoders: In this paper, we present a general framework to scale graph autoencoders (AE) and graph variational autoencoders (VAE). This framework leverages graph degeneracy concepts to train models only from a dense subset of nodes instead of using the entire graph. Together with a simple yet effective propagation mechanism, our approach significantly improves scalability and training speed while preserving performance. We evaluate and discuss our method on several variants of existing graph AE and VAE, providing the first application of these models to large graphs with up to millions of nodes and edges. We achieve empirically competitive results w.r.t. several popular scalable node embedding methods, which emphasizes the relevance of pursuing further research towards more scalable graph AE and VAE.) <|cite_end|> proposed to use a highly dense subset of nodes from the original graph.
Deng and his co-authors <|cite_start|> (Reference: GraphZoom: A multi-level spectral approach for accurate and scalable graph embedding: Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into much smaller graphs by merging nodes with high spectral similarities. GraphZoom allows any existing embedding methods to be applied to the coarsened graph, before it progressively refine the embeddings obtained at the coarsest level to increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom can substantially increase the classification accuracy and significantly accelerate the entire graph embedding process by up to 40.8x, when compared to the state-of-the-art unsupervised embedding methods.) <|cite_end|> suggest the creation of a fused graph that embeds the topology of the original graph and in turn recursively coarsens into smaller graphs for a number of iterations.
The methods have been shown to improve classification accuracy and accelerate the graph embedding process <|cite_start|> (Reference: GraphZoom: A multi-level spectral approach for accurate and scalable graph embedding: Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into much smaller graphs by merging nodes with high spectral similarities. GraphZoom allows any existing embedding methods to be applied to the coarsened graph, before it progressively refine the embeddings obtained at the coarsest level to increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom can substantially increase the classification accuracy and significantly accelerate the entire graph embedding process by up to 40.8x, when compared to the state-of-the-art unsupervised embedding methods.) <|cite_end|>.
In this work, we propose to use graph summarization techniques to create a condensed representation of the KG, i.e., the graph summary.
In our approach, we train an R-GCN on the graph summary and, subsequently, transfer the obtained weights to an R-GCN based on the original KG.
Finally, we evaluate the latter R-GCN and then investigate how its performance behaves when trained further, compared to an R-GCN based on the full KG that was not pre-trained on a summary.
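As a purely illustrative, hypothetical sketch of this train-on-summary-then-transfer workflow (not the implementation used in this work), the following PyTorch snippet trains a minimal dense R-GCN on a small stand-in summary graph and copies only the relation-specific weights, whose shapes do not depend on the number of nodes, into an R-GCN defined on a larger stand-in original graph before fine-tuning; all graphs, sizes and labels are random placeholders.
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class RGCNLayer(nn.Module):
    """Minimal dense R-GCN layer: one weight matrix per relation plus a self-loop weight."""
    def __init__(self, in_dim, out_dim, num_rels):
        super().__init__()
        self.rel_weights = nn.Parameter(torch.randn(num_rels, in_dim, out_dim) * 0.01)
        self.self_weight = nn.Parameter(torch.randn(in_dim, out_dim) * 0.01)

    def forward(self, adjs, h):
        # adjs: (num_rels, n, n) per-relation adjacency (0/1 here; normalisation omitted)
        out = h @ self.self_weight
        for r in range(adjs.shape[0]):
            out = out + adjs[r] @ (h @ self.rel_weights[r])
        return out

class RGCN(nn.Module):
    def __init__(self, num_nodes, hidden, num_classes, num_rels):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, hidden)             # node-specific, not transferred
        self.layer1 = RGCNLayer(hidden, hidden, num_rels)      # relation weights, transferable
        self.layer2 = RGCNLayer(hidden, num_classes, num_rels)

    def forward(self, adjs):
        h = F.relu(self.layer1(adjs, self.emb.weight))
        return self.layer2(adjs, h)

def train(model, adjs, labels, idx, epochs=50):
    opt = torch.optim.Adam(model.parameters(), lr=0.01)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(model(adjs)[idx], labels[idx])
        loss.backward()
        opt.step()

# Toy stand-ins: a 20-node "summary" of a 100-node graph, 3 relations, 4 classes.
num_rels, num_classes = 3, 4
adj_summary  = torch.rand(num_rels, 20, 20).round()
adj_original = torch.rand(num_rels, 100, 100).round()
y_summary  = torch.randint(0, num_classes, (20,))
y_original = torch.randint(0, num_classes, (100,))

summary_model  = RGCN(20, 16, num_classes, num_rels)
original_model = RGCN(100, 16, num_classes, num_rels)

# 1) train on the (small) summary graph
train(summary_model, adj_summary, y_summary, torch.arange(20))

# 2) transfer the relation-specific weights; their shapes do not depend on node count
original_model.layer1.load_state_dict(summary_model.layer1.state_dict())
original_model.layer2.load_state_dict(summary_model.layer2.state_dict())

# 3) evaluate / fine-tune on the original graph
train(original_model, adj_original, y_original, torch.arange(100), epochs=10)
\end{verbatim}
Note that in this toy setup the node embeddings cannot be transferred directly, since their dimension is tied to the number of nodes; only the relation-specific message-passing weights are reused.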
From our experiments we observe that training on the graph summary can yield accuracy comparable to or higher than training on the original graph.
Furthermore, we show that the smaller graph representations obtained with graph summarization methods reduce the computational overhead if the time needed for the summarization is not taken into account. <|paper_end|>
"<|reference_start|> Modeling Relational Data with Graph Convolutional Networks: Knowledge graphs enable a wide variety of applications, including question answering and information retrieval. Despite the great effort invested in their creation and maintenance, even the largest (e.g., Yago, DBPedia or Wikidata) remain incomplete. We introduce Relational Graph Convolutional Networks (R-GCNs) and apply them to two standard knowledge base completion tasks: Link prediction (recovery of missing facts, i.e. subject-predicate-object triples) and entity classification (recovery of missing entity attributes). R-GCNs are related to a recent class of neural networks operating on graphs, and are developed specifically to deal with the highly multi-relational data characteristic of realistic knowledge bases. We demonstrate the effectiveness of R-GCNs as a stand-alone model for entity classification. We further show that factorization models for link prediction such as DistMult can be significantly improved by enriching them with an encoder model to accumulate evidence over multiple inference steps in the relational graph, demonstrating a large improvement of 29.8% on FB15k-237 over a decoder-only baseline. <|reference_end|>",
"<|reference_start|> A Collection of Benchmark Datasets for Systematic Evaluations of Machine Learning on the Semantic Web: <|reference_end|>",
"<|reference_start|> GraphZoom: A multi-level spectral approach for accurate and scalable graph embedding: Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into much smaller graphs by merging nodes with high spectral similarities. GraphZoom allows any existing embedding methods to be applied to the coarsened graph, before it progressively refine the embeddings obtained at the coarsest level to increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom can substantially increase the classification accuracy and significantly accelerate the entire graph embedding process by up to 40.8x, when compared to the state-of-the-art unsupervised embedding methods. <|reference_end|>",
"<|reference_start|> GraphZoom: A multi-level spectral approach for accurate and scalable graph embedding: Graph embedding techniques have been increasingly deployed in a multitude of different applications that involve learning on non-Euclidean data. However, existing graph embedding models either fail to incorporate node attribute information during training or suffer from node attribute noise, which compromises the accuracy. Moreover, very few of them scale to large graphs due to their high computational complexity and memory usage. In this paper we propose GraphZoom, a multi-level framework for improving both accuracy and scalability of unsupervised graph embedding algorithms. GraphZoom first performs graph fusion to generate a new graph that effectively encodes the topology of the original graph and the node attribute information. This fused graph is then repeatedly coarsened into much smaller graphs by merging nodes with high spectral similarities. GraphZoom allows any existing embedding methods to be applied to the coarsened graph, before it progressively refine the embeddings obtained at the coarsest level to increasingly finer graphs. We have evaluated our approach on a number of popular graph datasets for both transductive and inductive tasks. Our experiments show that GraphZoom can substantially increase the classification accuracy and significantly accelerate the entire graph embedding process by up to 40.8x, when compared to the state-of-the-art unsupervised embedding methods. <|reference_end|>"
] | [
2,
3,
6,
7
] | {"<|cite_1|>": "ss-789482", "<|cite_3|>": "ss-770694", "<|cite_4|>": "arxiv-119366", "<|cite_5|>": "ss-778891", "<|cite_6|>": "arxiv-334366", "<|cite_7|>": "arxiv-192617", "<|cite_8|>": "arxiv-227356", "<|cite_9|>": "arxiv-227356"} |
1911.07497 | <|paper_start|> Title: Deterministic partial binary circulant compressed sensing matrices
Abstract: Deterministic partial binary circulant compressed sensing matrices: Compressed sensing (CS) is a signal acquisition paradigm to simultaneously acquire and reduce the dimension of signals that admit a sparse representation. This is achieved by collecting linear, non-adaptive measurements of a signal, which can be formalized as multiplying the signal with a "measurement matrix". Most matrices used in CS are random matrices, as they satisfy the restricted isometry property (RIP) in an optimal regime of the number of measurements with high probability. However, these matrices have their own caveats, and for this reason, deterministic measurement matrices have been proposed. While there is a wide range of deterministic matrices in the literature, we propose a novel class of deterministic matrices using the Legendre symbol. This construction has a simple structure; it is a binary matrix with a partial circulant structure, which provides fast matrix-vector multiplication and a fast reconstruction algorithm. We derive a bound on the sparsity level of signals that can be measured (and reconstructed) with this class of matrices. We perform quantization using these matrices, and we verify the performance of these matrices (and compare with other existing constructions) numerically.
Introduction
In this paper, we present a novel construction for deterministic CS matrices based on decimated Legendre sequences. The Legendre sequence is a binary sequence with $\pm 1$ entries, which initially seems ideal for use in the context of CS. However, in order to be able to use these sequences as rows or columns of a measurement matrix, one has to guarantee a low maximum correlation between two such sequences. This was first done by Zhang et al. in 2002 (before the birth of CS) in the context of coding theory, by considering \textit{decimated} Legendre sequences. The use of the Legendre symbol in CS with random matrices was proposed by Bandeira et al. in 2016. A summary of their work is given in Section \ref{sumlegendre}. In the same year, Zhang et al. <|cite_start|> (Reference: Deterministic bipolar measurement matrices with flexible sizes from Legendre sequence: A deterministic method to construct bipolar measurement matrices for compressed sensing is proposed based on Legendre sequences. The novel matrices have remarkably flexible measurement sizes, relatively low coherence and show empirically good performance compared with Gaussian matrices.) <|cite_end|> proposed the use of the Legendre symbol for the construction of deterministic CS matrices. Their construction was based on their previous work from 2002 in the context of coding theory, and it has the feature of being binary and having low coherence. Moreover, since any prime number can be used as the number of measurements, the difference between the sizes of two adjacent matrices in their construction is small compared to many other deterministic constructions.
As outlined below, another important feature that a CS matrix can have is the \textit{circulant matrix} structure (see Section \ref{circulantmat}). The use of circulant matrices in random CS was first proposed by Bajwa et al. <|cite_start|> (Reference: Toeplitz-structured compressed sensing matrices: The problem of recovering a sparse signal x Rn from a relatively small number of its observations of the form y = Ax Rk, where A is a known matrix and k « n, has recently received a lot of attention under the rubric of compressed sensing (CS) and has applications in many areas of signal processing such as data cmpression, image processing, dimensionality reduction, etc. Recent work has established that if A is a random matrix with entries drawn independently from certain probability distributions then exact recovery of x from these observations can be guaranteed with high probability. In this paper, we show that Toeplitz-structured matrices with entries drawn independently from the same distributions are also sufficient to recover x from y with high probability, and we compare the performance of such matrices with that of fully independent and identically distributed ones. The use of Toeplitz matrices in CS applications has several potential advantages: (i) they require the generation of only O(n) independent random variables; (ii) multiplication with Toeplitz matrices can be efficiently implemented using fast Fourier transform, resulting in faster acquisition and reconstruction algorithms; and (iii) Toeplitz-structured matrices arise naturally in certain application areas such as system identification.) <|cite_end|> in 2007. An equivalent approach was proposed by Romberg <|cite_start|> (Reference: Compressive sensing by random convolution: This paper demonstrates that convolution with random waveform followed by random time-domain subsampling is a universally efficient compressive sensing strategy. We show that an $n$-dimensional signal which is $S$-sparse in any fixed orthonormal representation can be recovered from $m\gtrsim S\log n$ samples from its convolution with a pulse whose Fourier transform has unit magnitude and random phase at all frequencies. The time-domain subsampling can be done in one of two ways: in the first, we simply observe $m$ samples of the random convolution; in the second, we break the random convolution into $m$ blocks and summarize each with a single randomized sum. We also discuss several imaging applications where convolution with a random pulse allows us to superresolve fine-scale features, allowing us to recover high-resolution signals from low-resolution measurements.) <|cite_end|> in 2009. In the latter approach, given a signal $x \in \mathbb{R}^n$, first a \textit{convolution} of $x$, of the form $Hx$ is considered, where $H$ can be written in the form of $$H=\frac{1}{\sqrt{n}} F^* \Sigma F$$ Here, $n$ is as usual the ambient dimension, $F$ is the discrete Fourier matrix, and $\Sigma$ is a diagonal matrix whose diagonal entries are complex numbers with unit norm, and random phases. Following this step, we subsample the measurements. Therefore, the measurement matrix in this approach can be written as $\Phi=R_{\Omega} H$, where $\Omega \subseteq \{1,2,\cdots,n\}$ is a set with $m$ elements, and $R_{\Omega}$ is a sampling operator that restricts the rows to a random set $\Omega$, i.e., a random choice of a set $\Omega$ among all possible $\binom{n}{m}$ such sets. Based on this idea, Li et al. 
<|cite_start|> (Reference: Convolutional Compressed Sensing Using Deterministic Sequences: In this paper, a new class of circulant matrices built from deterministic sequences is proposed for convolution-based compressed sensing (CS). In contrast to random convolution, the coefficients of the underlying filter are given by the discrete Fourier transform of a deterministic sequence with good autocorrelation. Both uniform recovery and non-uniform recovery of sparse signals are investigated, based on the coherence parameter of the proposed sensing matrices. Many examples of the sequences are investigated, particularly the Frank-Zadoff-Chu (FZC) sequence, the \textit{m}-sequence and the Golay sequence. A salient feature of the proposed sensing matrices is that they can not only handle sparse signals in the time domain, but also those in the frequency and/or or discrete-cosine transform (DCT) domain.) <|cite_end|> considered a measurement matrix of the form $\Phi=R_{\Omega} A$, where $A$ is a deterministic matrix. Since $R_{\Omega}$ here is a random sampling operator, the measurement matrix in their construction can not be considered as \textit{deterministic} yet. To the best of our knowledge, the only class of \textit{circulant deterministic} matrices considered in the literature so far is the class of matrices introduced by Cui <|cite_start|> (Reference: Construction of Deterministic Measurements Matrix Using Decimated Legendre Sequences: This paper proposed and constructed a new class of deterministic measurements matrix by using decimated binary Legendre sequences for convolutional Compressed Sensing. The author proves that when the measurement matrix is constructed by a random subsampling, it can offer a stable sparse reconstruction. Besides, the simulation results shows that when a deterministic subsampler is used, the proposed matrix can also guarantee the stable reconstruction as good as random Gaussian or Bernoulli matrixes do, which are commonly used in CS.) <|cite_end|>. In their paper, they constructed the circulant matrix $A$ by first writing it of the form $A=\frac{1}{\sqrt{n}} F^* \mbox{diag}(\sigma) F$, then, considering the sequence $\sigma$ as a \textit{decimated Legendre} sequence, and finally considering the \textit{first} $m$ rows of $A$ as the measurement matrix. They show \textit{empirically} that this matrix performs very well as a measurement matrix, but no proof was given in this regard. Other \textit{deterministic binary} matrices given in the literature in the context of CS include DeVore construction, and the constructions given in <|cite_start|> (Reference: Deterministic construction of compressed sensing matrices via algebraic curves: Compressed sensing is a sampling technique which provides a fundamentally new approach to data acquisition. Comparing with traditional methods, compressed sensing makes full use of sparsity so that a sparse signal can be reconstructed from very few measurements. A central problem in compressed sensing is the construction of sensing matrices. While random sensing matrices have been studied intensively, only a few deterministic constructions are known. Inspired by algebraic geometry codes, we introduce a new deterministic construction via algebraic curves over finite fields, which is a natural generalization of DeVore's construction using polynomials over finite fields. The diversity of algebraic curves provides numerous choices for sensing matrices. 
By choosing appropriate curves, we are able to construct binary sensing matrices which are superior to Devore's ones. We hope this connection between algebraic geometry and compressed sensing will provide a new point of view and stimulate further research in both areas.) <|cite_end|> <|cite_start|> (Reference: Deterministic compressed sensing matrices: Construction via Euler Squares and applications: In Compressed Sensing the matrices that satisfy the Restricted Isometry Property (RIP) play an important role. But to date, very few results for designing such matrices are available. For applications such as multiplier-less data compression, binary sensing matrices are of interest. The present work constructs deterministic and binary sensing matrices using Euler Squares. In particular, given a positive integer $m$ different from $p, p^2$ for a prime $p$, we show that it is possible to construct a binary sensing matrix of size $m \times c (m\mu)^2$, where $\mu$ is the coherence parameter of the matrix and $c \in [1,2)$. The matrices that we construct have smaller density (that is, percentage of nonzero entries in the matrix is small) with no function evaluation in their construction, which support algorithms with low computational complexity. Through experimental work, we show that our binary sensing matrices can be used for such applications as content based image retrieval. Our simulation results demonstrate that the Euler Square based CS matrices give better performance than their Gaussian counterparts.) <|cite_end|> <|cite_start|> (Reference: Deterministic Construction of Binary, Bipolar and Ternary Compressed Sensing Matrices: In this paper we establish the connection between the Orthogonal Optical Codes (OOC) and binary compressed sensing matrices. We also introduce deterministic bipolar $m\times n$ RIP fulfilling $\pm 1$ matrices of order $k$ such that $m\leq\mathcal{O}\big(k (\log_2 n)^{\frac{\log_2 k}{\ln \log_2 k}}\big)$. The columns of these matrices are binary BCH code vectors where the zeros are replaced by -1. Since the RIP is established by means of coherence, the simple greedy algorithms such as Matching Pursuit are able to recover the sparse solution from the noiseless samples. Due to the cyclic property of the BCH codes, we show that the FFT algorithm can be employed in the reconstruction methods to considerably reduce the computational complexity. In addition, we combine the binary and bipolar matrices to form ternary sensing matrices ($\{0,1,-1\}$ elements) that satisfy the RIP condition.) <|cite_end|>.
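To make the convolution-based sensing model recalled above concrete, here is an illustrative numpy sketch (an assumption of ours, not code from any of the cited works) that forms the circulant operator $H = F^* \Sigma F$ with a unit-modulus random-phase diagonal (using the unitary DFT, so normalization constants differ slightly from the formula above) and keeps a random subset of $m$ rows as the measurement matrix $R_{\Omega} H$; the dimensions are placeholders.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, s = 256, 64, 8                          # ambient dimension, measurements, sparsity

# unitary DFT matrix (the DFT matrix is symmetric, so the FFT of the identity works)
F_mat = np.fft.fft(np.eye(n)) / np.sqrt(n)
phases = np.exp(2j * np.pi * rng.random(n))   # unit-modulus diagonal with random phases
H = F_mat.conj().T @ np.diag(phases) @ F_mat  # circulant "random convolution" operator
Omega = rng.choice(n, size=m, replace=False)  # random choice of m rows
Phi = H[Omega, :]                             # measurement matrix R_Omega H

# a sparse test signal and its compressive measurements
x = np.zeros(n)
x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
y = Phi @ x
print(Phi.shape, np.round(np.linalg.norm(y), 3))
\end{verbatim}
Replacing the random phases by a deterministic sequence, as in the constructions discussed above, changes only the definition of the diagonal.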
To the best of our knowledge, the construction we introduce in this chapter is the \textit{first} deterministic binary circulant construction in CS which is proved to have low coherence, and hence, can be used for recovery of sparse signals. Compared to the work of Cui <|cite_start|> (Reference: Construction of Deterministic Measurements Matrix Using Decimated Legendre Sequences: This paper proposed and constructed a new class of deterministic measurements matrix by using decimated binary Legendre sequences for convolutional Compressed Sensing. The author proves that when the measurement matrix is constructed by a random subsampling, it can offer a stable sparse reconstruction. Besides, the simulation results shows that when a deterministic subsampler is used, the proposed matrix can also guarantee the stable reconstruction as good as random Gaussian or Bernoulli matrixes do, which are commonly used in CS.) <|cite_end|>, in addition to giving a proof for why the construction can be used in CS, our construction has the advantage of having a \textit{simple}, \textit{explicit} formula for each entry of the measurement matrix itself (as opposed to its diagonalization). The circulant structure of our construction allows us to perform a fast matrix-vector multiplication, and a fast recovery algorithm. Moreover, our construction has a small difference between the sizes of two adjacent matrices. (as we will see below, the number of measurements in our construction is chosen as $ \lceil p^{3/4} \rceil $, where $n=p$ is a prime number and is assumed to be the ambient dimension). Lastly, we will show that we can perform the one-stage recovery $\Sigma \Delta$ quantization as given in <|cite_start|> (Reference: Quantization of compressive samples with stable and robust recovery: In this paper we study the quantization stage that is implicit in any compressed sensing signal acquisition paradigm. We propose using Sigma-Delta quantization and a subsequent reconstruction scheme based on convex optimization. We prove that the reconstruction error due to quantization decays polynomially in the number of measurements. Our results apply to arbitrary signals, including compressible ones, and account for measurement noise. Additionally, they hold for sub-Gaussian (including Gaussian and Bernoulli) random compressed sensing measurements, as well as for both high bit-depth and coarse quantizers, and they extend to 1-bit quantization. In the noise-free case, when the signal is strictly sparse we prove that by optimizing the order of the quantization scheme one can obtain root-exponential decay in the reconstruction error due to quantization.) <|cite_end|> and generalized in using our construction. Similar to the constructions given in <|cite_start|> (Reference: Construction of Deterministic Measurements Matrix Using Decimated Legendre Sequences: This paper proposed and constructed a new class of deterministic measurements matrix by using decimated binary Legendre sequences for convolutional Compressed Sensing. The author proves that when the measurement matrix is constructed by a random subsampling, it can offer a stable sparse reconstruction. Besides, the simulation results shows that when a deterministic subsampler is used, the proposed matrix can also guarantee the stable reconstruction as good as random Gaussian or Bernoulli matrixes do, which are commonly used in CS.) 
<|cite_end|> <|cite_start|> (Reference: Deterministic bipolar measurement matrices with flexible sizes from Legendre sequence: A deterministic method to construct bipolar measurement matrices for compressed sensing is proposed based on Legendre sequences. The novel matrices have remarkably flexible measurement sizes, relatively low coherence and show empirically good performance compared with Gaussian matrices.) <|cite_end|>, our construction exploits the Legendre symbol. The definition and basic properties of the Legendre symbol can be found in any elementary Number Theory textbook, e.g., in <|cite_start|> (Reference: Elementary number theory: What is number theory? divisibility prime numbers numerical functions the algebra of congruence classes congruences of higher degree the number theory of the reals diophantine equations.) <|cite_end|>. <|paper_end|>
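As a purely illustrative sketch of the ingredients mentioned in this introduction (not the paper's exact construction, which relies on decimated Legendre sequences and differs in its details), the following numpy snippet computes the Legendre symbol via Euler's criterion, builds a $\pm 1$ circulant matrix whose rows are cyclic shifts of the Legendre sequence, keeps $m = \lceil p^{3/4} \rceil$ rows, and reports the resulting coherence; the prime $p$ is a placeholder.
\begin{verbatim}
import numpy as np
from math import ceil

def legendre_symbol(a, p):
    """(a|p) for an odd prime p via Euler's criterion: a^((p-1)/2) mod p."""
    ls = pow(a % p, (p - 1) // 2, p)
    return -1 if ls == p - 1 else ls

p = 257                                        # prime ambient dimension n = p
seq = np.array([legendre_symbol(a, p) for a in range(p)])
seq[0] = 1                                     # replace the single 0 entry to keep entries in {-1,+1}

# circulant matrix whose i-th row is the Legendre sequence cyclically shifted by i
C = np.stack([np.roll(seq, i) for i in range(p)])
m = ceil(p ** 0.75)                            # number of rows kept, m = ceil(p^(3/4))
Phi = C[:m, :] / np.sqrt(m)                    # partial circulant matrix with unit-norm columns

# coherence: largest absolute inner product between distinct (unit-norm) columns
G = Phi.T @ Phi
coherence = np.max(np.abs(G - np.eye(p)))
print(Phi.shape, round(float(coherence), 3))
\end{verbatim}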
"<|reference_start|> Compressive sensing by random convolution: This paper demonstrates that convolution with random waveform followed by random time-domain subsampling is a universally efficient compressive sensing strategy. We show that an $n$-dimensional signal which is $S$-sparse in any fixed orthonormal representation can be recovered from $m\\gtrsim S\\log n$ samples from its convolution with a pulse whose Fourier transform has unit magnitude and random phase at all frequencies. The time-domain subsampling can be done in one of two ways: in the first, we simply observe $m$ samples of the random convolution; in the second, we break the random convolution into $m$ blocks and summarize each with a single randomized sum. We also discuss several imaging applications where convolution with a random pulse allows us to superresolve fine-scale features, allowing us to recover high-resolution signals from low-resolution measurements. <|reference_end|>",
"<|reference_start|> Convolutional Compressed Sensing Using Deterministic Sequences: In this paper, a new class of circulant matrices built from deterministic sequences is proposed for convolution-based compressed sensing (CS). In contrast to random convolution, the coefficients of the underlying filter are given by the discrete Fourier transform of a deterministic sequence with good autocorrelation. Both uniform recovery and non-uniform recovery of sparse signals are investigated, based on the coherence parameter of the proposed sensing matrices. Many examples of the sequences are investigated, particularly the Frank-Zadoff-Chu (FZC) sequence, the \\textit{m}-sequence and the Golay sequence. A salient feature of the proposed sensing matrices is that they can not only handle sparse signals in the time domain, but also those in the frequency and/or or discrete-cosine transform (DCT) domain. <|reference_end|>",
"<|reference_start|> Deterministic construction of compressed sensing matrices via algebraic curves: Compressed sensing is a sampling technique which provides a fundamentally new approach to data acquisition. Comparing with traditional methods, compressed sensing makes full use of sparsity so that a sparse signal can be reconstructed from very few measurements. A central problem in compressed sensing is the construction of sensing matrices. While random sensing matrices have been studied intensively, only a few deterministic constructions are known. Inspired by algebraic geometry codes, we introduce a new deterministic construction via algebraic curves over finite fields, which is a natural generalization of DeVore's construction using polynomials over finite fields. The diversity of algebraic curves provides numerous choices for sensing matrices. By choosing appropriate curves, we are able to construct binary sensing matrices which are superior to Devore's ones. We hope this connection between algebraic geometry and compressed sensing will provide a new point of view and stimulate further research in both areas. <|reference_end|>",
"<|reference_start|> Quantization of compressive samples with stable and robust recovery: In this paper we study the quantization stage that is implicit in any compressed sensing signal acquisition paradigm. We propose using Sigma-Delta quantization and a subsequent reconstruction scheme based on convex optimization. We prove that the reconstruction error due to quantization decays polynomially in the number of measurements. Our results apply to arbitrary signals, including compressible ones, and account for measurement noise. Additionally, they hold for sub-Gaussian (including Gaussian and Bernoulli) random compressed sensing measurements, as well as for both high bit-depth and coarse quantizers, and they extend to 1-bit quantization. In the noise-free case, when the signal is strictly sparse we prove that by optimizing the order of the quantization scheme one can obtain root-exponential decay in the reconstruction error due to quantization. <|reference_end|>"
] | [
2,
3,
5,
9
] | {"<|cite_3|>": "ss-2190918", "<|cite_4|>": "ss-782365", "<|cite_5|>": "ss-782366", "<|cite_6|>": "arxiv-37628", "<|cite_7|>": "ss-2190919", "<|multi_cite_9_1|>": "ss-2190920", "<|multi_cite_9_2|>": "arxiv-72117", "<|multi_cite_9_3|>": "arxiv-8736", "<|cite_10|>": "ss-2190919", "<|cite_11|>": "arxiv-75449", "<|multi_cite_13_1|>": "ss-2190919", "<|multi_cite_13_3|>": "ss-2190918", "<|cite_14|>": "ss-2106953"} |